00:00:00.001 Started by upstream project "autotest-nightly" build number 4276 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3639 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.098 The recommended git tool is: git 00:00:00.098 using credential 00000000-0000-0000-0000-000000000002 00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.132 Fetching changes from the remote Git repository 00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.167 Using shallow fetch with depth 1 00:00:00.167 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.167 > git --version # timeout=10 00:00:00.204 > git --version # 'git version 2.39.2' 00:00:00.204 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.795 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.808 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.822 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.822 > git config core.sparsecheckout # timeout=10 00:00:04.833 > git read-tree -mu HEAD # timeout=10 00:00:04.849 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.870 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.870 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.960 [Pipeline] Start of Pipeline 00:00:04.973 [Pipeline] library 00:00:04.975 Loading library shm_lib@master 00:00:04.975 Library shm_lib@master is cached. Copying from home. 00:00:04.992 [Pipeline] node 00:00:05.018 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.020 [Pipeline] { 00:00:05.027 [Pipeline] catchError 00:00:05.028 [Pipeline] { 00:00:05.036 [Pipeline] wrap 00:00:05.042 [Pipeline] { 00:00:05.047 [Pipeline] stage 00:00:05.048 [Pipeline] { (Prologue) 00:00:05.061 [Pipeline] echo 00:00:05.062 Node: VM-host-SM38 00:00:05.066 [Pipeline] cleanWs 00:00:05.076 [WS-CLEANUP] Deleting project workspace... 00:00:05.076 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.082 [WS-CLEANUP] done 00:00:05.314 [Pipeline] setCustomBuildProperty 00:00:05.389 [Pipeline] httpRequest 00:00:05.726 [Pipeline] echo 00:00:05.728 Sorcerer 10.211.164.20 is alive 00:00:05.735 [Pipeline] retry 00:00:05.736 [Pipeline] { 00:00:05.746 [Pipeline] httpRequest 00:00:05.749 HttpMethod: GET 00:00:05.750 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.750 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.761 Response Code: HTTP/1.1 200 OK 00:00:05.761 Success: Status code 200 is in the accepted range: 200,404 00:00:05.762 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.326 [Pipeline] } 00:00:07.335 [Pipeline] // retry 00:00:07.341 [Pipeline] sh 00:00:07.625 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.640 [Pipeline] httpRequest 00:00:07.990 [Pipeline] echo 00:00:07.992 Sorcerer 10.211.164.20 is alive 00:00:07.999 [Pipeline] retry 00:00:08.000 [Pipeline] { 00:00:08.011 [Pipeline] httpRequest 00:00:08.016 HttpMethod: GET 00:00:08.016 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:08.017 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:08.034 Response Code: HTTP/1.1 200 OK 00:00:08.035 Success: Status code 200 is in the accepted range: 200,404 00:00:08.036 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:22.087 [Pipeline] } 00:01:22.104 [Pipeline] // retry 00:01:22.111 [Pipeline] sh 00:01:22.399 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:24.956 [Pipeline] sh 00:01:25.240 + git -C spdk log --oneline -n5 00:01:25.241 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:25.241 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:25.241 4bcab9fb9 correct kick for CQ full case 00:01:25.241 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:25.241 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:25.261 [Pipeline] writeFile 00:01:25.276 [Pipeline] sh 00:01:25.564 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:25.578 [Pipeline] sh 00:01:25.864 + cat autorun-spdk.conf 00:01:25.864 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.864 SPDK_TEST_NVME=1 00:01:25.864 SPDK_TEST_FTL=1 00:01:25.864 SPDK_TEST_ISAL=1 00:01:25.864 SPDK_RUN_ASAN=1 00:01:25.864 SPDK_RUN_UBSAN=1 00:01:25.864 SPDK_TEST_XNVME=1 00:01:25.864 SPDK_TEST_NVME_FDP=1 00:01:25.864 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.873 RUN_NIGHTLY=1 00:01:25.875 [Pipeline] } 00:01:25.889 [Pipeline] // stage 00:01:25.905 [Pipeline] stage 00:01:25.907 [Pipeline] { (Run VM) 00:01:25.920 [Pipeline] sh 00:01:26.207 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:26.207 + echo 'Start stage prepare_nvme.sh' 00:01:26.207 Start stage prepare_nvme.sh 00:01:26.207 + [[ -n 2 ]] 00:01:26.207 + disk_prefix=ex2 00:01:26.207 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:26.207 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:26.207 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:26.207 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.207 ++ SPDK_TEST_NVME=1 00:01:26.207 ++ SPDK_TEST_FTL=1 00:01:26.207 
++ SPDK_TEST_ISAL=1 00:01:26.207 ++ SPDK_RUN_ASAN=1 00:01:26.207 ++ SPDK_RUN_UBSAN=1 00:01:26.207 ++ SPDK_TEST_XNVME=1 00:01:26.207 ++ SPDK_TEST_NVME_FDP=1 00:01:26.207 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.207 ++ RUN_NIGHTLY=1 00:01:26.207 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:26.207 + nvme_files=() 00:01:26.207 + declare -A nvme_files 00:01:26.207 + backend_dir=/var/lib/libvirt/images/backends 00:01:26.207 + nvme_files['nvme.img']=5G 00:01:26.207 + nvme_files['nvme-cmb.img']=5G 00:01:26.207 + nvme_files['nvme-multi0.img']=4G 00:01:26.207 + nvme_files['nvme-multi1.img']=4G 00:01:26.207 + nvme_files['nvme-multi2.img']=4G 00:01:26.207 + nvme_files['nvme-openstack.img']=8G 00:01:26.207 + nvme_files['nvme-zns.img']=5G 00:01:26.207 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:26.207 + (( SPDK_TEST_FTL == 1 )) 00:01:26.207 + nvme_files["nvme-ftl.img"]=6G 00:01:26.207 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:26.207 + nvme_files["nvme-fdp.img"]=1G 00:01:26.207 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:26.207 + for nvme in "${!nvme_files[@]}" 00:01:26.207 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:26.469 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.469 + for nvme in "${!nvme_files[@]}" 00:01:26.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:27.414 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:27.414 + for nvme in "${!nvme_files[@]}" 00:01:27.414 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:27.414 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:27.414 + for nvme in "${!nvme_files[@]}" 00:01:27.414 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:27.414 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:27.414 + for nvme in "${!nvme_files[@]}" 00:01:27.414 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:27.676 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:27.676 + for nvme in "${!nvme_files[@]}" 00:01:27.676 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:27.938 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:27.938 + for nvme in "${!nvme_files[@]}" 00:01:27.938 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:28.512 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.512 + for nvme in "${!nvme_files[@]}" 00:01:28.512 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:28.512 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:28.773 + for nvme in "${!nvme_files[@]}" 00:01:28.773 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img 
-s 5G
00:01:29.346 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.346 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:29.346 + echo 'End stage prepare_nvme.sh'
00:01:29.346 End stage prepare_nvme.sh
00:01:29.359 [Pipeline] sh
00:01:29.644 + DISTRO=fedora39
00:01:29.644 + CPUS=10
00:01:29.644 + RAM=12288
00:01:29.644 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:29.644 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:29.644
00:01:29.644 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:29.644 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:29.644 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:29.644 HELP=0
00:01:29.644 DRY_RUN=0
00:01:29.644 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:01:29.644 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:29.644 NVME_AUTO_CREATE=0
00:01:29.644 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:01:29.644 NVME_CMB=,,,,
00:01:29.644 NVME_PMR=,,,,
00:01:29.644 NVME_ZNS=,,,,
00:01:29.644 NVME_MS=true,,,,
00:01:29.644 NVME_FDP=,,,on,
00:01:29.644 SPDK_VAGRANT_DISTRO=fedora39
00:01:29.645 SPDK_VAGRANT_VMCPU=10
00:01:29.645 SPDK_VAGRANT_VMRAM=12288
00:01:29.645 SPDK_VAGRANT_PROVIDER=libvirt
00:01:29.645 SPDK_VAGRANT_HTTP_PROXY=
00:01:29.645 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:29.645 SPDK_OPENSTACK_NETWORK=0
00:01:29.645 VAGRANT_PACKAGE_BOX=0
00:01:29.645 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:29.645 FORCE_DISTRO=true
00:01:29.645 VAGRANT_BOX_VERSION=
00:01:29.645 EXTRA_VAGRANTFILES=
00:01:29.645 NIC_MODEL=e1000
00:01:29.645
00:01:29.645 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:29.645 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:32.209 Bringing machine 'default' up with 'libvirt' provider...
00:01:32.780 ==> default: Creating image (snapshot of base box volume).
00:01:32.780 ==> default: Creating domain with the following settings...
00:01:32.780 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731854297_b590cb8039ec0a1df8c1
00:01:32.780 ==> default: -- Domain type: kvm
00:01:32.780 ==> default: -- Cpus: 10
00:01:32.780 ==> default: -- Feature: acpi
00:01:32.780 ==> default: -- Feature: apic
00:01:32.780 ==> default: -- Feature: pae
00:01:32.780 ==> default: -- Memory: 12288M
00:01:32.780 ==> default: -- Memory Backing: hugepages:
00:01:32.780 ==> default: -- Management MAC:
00:01:32.780 ==> default: -- Loader:
00:01:32.780 ==> default: -- Nvram:
00:01:32.780 ==> default: -- Base box: spdk/fedora39
00:01:32.780 ==> default: -- Storage pool: default
00:01:32.780 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731854297_b590cb8039ec0a1df8c1.img (20G)
00:01:32.780 ==> default: -- Volume Cache: default
00:01:32.780 ==> default: -- Kernel:
00:01:32.780 ==> default: -- Initrd:
00:01:32.780 ==> default: -- Graphics Type: vnc
00:01:32.780 ==> default: -- Graphics Port: -1
00:01:32.780 ==> default: -- Graphics IP: 127.0.0.1
00:01:32.780 ==> default: -- Graphics Password: Not defined
00:01:32.780 ==> default: -- Video Type: cirrus
00:01:32.780 ==> default: -- Video VRAM: 9216
00:01:32.780 ==> default: -- Sound Type:
00:01:32.781 ==> default: -- Keymap: en-us
00:01:32.781 ==> default: -- TPM Path:
00:01:32.781 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:32.781 ==> default: -- Command line args:
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:32.781 ==> default: -> value=-drive,
00:01:32.781 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:32.781 ==> default: -> value=-drive,
00:01:32.781 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:32.781 ==> default: -> value=-drive,
00:01:32.781 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:32.781 ==> default: -> value=-drive,
00:01:32.781 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:32.781 ==> default: -> value=-drive,
00:01:32.781 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:32.781 ==> default: -> value=-drive,
00:01:32.781 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:32.781 ==> default: -> value=-device,
00:01:32.781 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.041 ==> default: Creating shared folders metadata...
00:01:33.041 ==> default: Starting domain.
00:01:34.951 ==> default: Waiting for domain to get an IP address...
00:01:49.900 ==> default: Waiting for SSH to become available...
00:01:51.817 ==> default: Configuring and enabling network interfaces...
00:01:56.025 default: SSH address: 192.168.121.154:22
00:01:56.025 default: SSH username: vagrant
00:01:56.025 default: SSH auth method: private key
00:01:57.400 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:03.959 ==> default: Mounting SSHFS shared folder...
00:02:04.525 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:04.525 ==> default: Checking Mount..
00:02:05.901 ==> default: Folder Successfully Mounted!
00:02:05.901
00:02:05.901 SUCCESS!
00:02:05.901
00:02:05.901 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:05.901 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:05.901 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:05.901
00:02:05.909 [Pipeline] }
00:02:05.924 [Pipeline] // stage
00:02:05.933 [Pipeline] dir
00:02:05.933 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:05.935 [Pipeline] {
00:02:05.947 [Pipeline] catchError
00:02:05.949 [Pipeline] {
00:02:05.961 [Pipeline] sh
00:02:06.239 + vagrant ssh-config --host vagrant
00:02:06.239 + sed -ne '/^Host/,$p'
00:02:06.239 + tee ssh_conf
00:02:08.782 Host vagrant
00:02:08.782 HostName 192.168.121.154
00:02:08.782 User vagrant
00:02:08.782 Port 22
00:02:08.782 UserKnownHostsFile /dev/null
00:02:08.782 StrictHostKeyChecking no
00:02:08.782 PasswordAuthentication no
00:02:08.782 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:08.782 IdentitiesOnly yes
00:02:08.782 LogLevel FATAL
00:02:08.782 ForwardAgent yes
00:02:08.782 ForwardX11 yes
00:02:08.782
00:02:08.796 [Pipeline] withEnv
00:02:08.798 [Pipeline] {
00:02:08.812 [Pipeline] sh
00:02:09.089 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:09.089 source /etc/os-release
00:02:09.089 [[ -e /image.version ]] && img=$(< /image.version)
00:02:09.089 # Minimal, systemd-like check.
00:02:09.089 if [[ -e /.dockerenv ]]; then 00:02:09.089 # Clear garbage from the node'\''s name: 00:02:09.089 # agt-er_autotest_547-896 -> autotest_547-896 00:02:09.089 # $HOSTNAME is the actual container id 00:02:09.089 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:09.089 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:09.089 # We can assume this is a mount from a host where container is running, 00:02:09.089 # so fetch its hostname to easily identify the target swarm worker. 00:02:09.089 container="$(< /etc/hostname) ($agent)" 00:02:09.089 else 00:02:09.089 # Fallback 00:02:09.089 container=$agent 00:02:09.089 fi 00:02:09.089 fi 00:02:09.089 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:09.089 ' 00:02:09.099 [Pipeline] } 00:02:09.115 [Pipeline] // withEnv 00:02:09.123 [Pipeline] setCustomBuildProperty 00:02:09.137 [Pipeline] stage 00:02:09.139 [Pipeline] { (Tests) 00:02:09.155 [Pipeline] sh 00:02:09.433 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:09.705 [Pipeline] sh 00:02:09.982 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:10.255 [Pipeline] timeout 00:02:10.256 Timeout set to expire in 50 min 00:02:10.258 [Pipeline] { 00:02:10.272 [Pipeline] sh 00:02:10.552 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:10.811 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:02:10.823 [Pipeline] sh 00:02:11.101 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:11.371 [Pipeline] sh 00:02:11.649 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:11.923 [Pipeline] sh 00:02:12.229 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:12.229 ++ readlink -f spdk_repo 00:02:12.229 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:12.229 + [[ -n /home/vagrant/spdk_repo ]] 00:02:12.229 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:12.229 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:12.229 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:12.229 + [[ ! 
-d /home/vagrant/spdk_repo/output ]]
00:02:12.229 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:12.229 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:12.229 + cd /home/vagrant/spdk_repo
00:02:12.229 + source /etc/os-release
00:02:12.229 ++ NAME='Fedora Linux'
00:02:12.229 ++ VERSION='39 (Cloud Edition)'
00:02:12.229 ++ ID=fedora
00:02:12.229 ++ VERSION_ID=39
00:02:12.229 ++ VERSION_CODENAME=
00:02:12.229 ++ PLATFORM_ID=platform:f39
00:02:12.229 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:12.229 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:12.229 ++ LOGO=fedora-logo-icon
00:02:12.229 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:12.229 ++ HOME_URL=https://fedoraproject.org/
00:02:12.229 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:12.229 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:12.229 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:12.229 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:12.229 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:12.229 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:12.229 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:12.229 ++ SUPPORT_END=2024-11-12
00:02:12.229 ++ VARIANT='Cloud Edition'
00:02:12.229 ++ VARIANT_ID=cloud
00:02:12.229 + uname -a
00:02:12.229 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:12.229 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:12.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:13.052 Hugepages
00:02:13.052 node hugesize free / total
00:02:13.052 node0 1048576kB 0 / 0
00:02:13.052 node0 2048kB 0 / 0
00:02:13.052
00:02:13.052 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:13.052 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:13.053 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:13.053 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:13.053 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:13.053 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:13.053 + rm -f /tmp/spdk-ld-path
00:02:13.053 + source autorun-spdk.conf
00:02:13.053 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.053 ++ SPDK_TEST_NVME=1
00:02:13.053 ++ SPDK_TEST_FTL=1
00:02:13.053 ++ SPDK_TEST_ISAL=1
00:02:13.053 ++ SPDK_RUN_ASAN=1
00:02:13.053 ++ SPDK_RUN_UBSAN=1
00:02:13.053 ++ SPDK_TEST_XNVME=1
00:02:13.053 ++ SPDK_TEST_NVME_FDP=1
00:02:13.053 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:13.053 ++ RUN_NIGHTLY=1
00:02:13.053 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:13.053 + [[ -n '' ]]
00:02:13.053 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:13.053 + for M in /var/spdk/build-*-manifest.txt
00:02:13.053 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:13.053 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:13.053 + for M in /var/spdk/build-*-manifest.txt
00:02:13.053 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:13.053 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:13.053 + for M in /var/spdk/build-*-manifest.txt
00:02:13.053 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:13.053 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:13.053 ++ uname
00:02:13.053 + [[ Linux == \L\i\n\u\x ]]
00:02:13.053 + sudo dmesg -T
00:02:13.053 + sudo dmesg --clear
00:02:13.053 + dmesg_pid=5030
00:02:13.053
+ [[ Fedora Linux == FreeBSD ]] 00:02:13.053 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.053 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.053 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:13.053 + [[ -x /usr/src/fio-static/fio ]] 00:02:13.053 + sudo dmesg -Tw 00:02:13.053 + export FIO_BIN=/usr/src/fio-static/fio 00:02:13.053 + FIO_BIN=/usr/src/fio-static/fio 00:02:13.053 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:13.053 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:13.053 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:13.053 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.053 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.053 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:13.053 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.053 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.053 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:13.311 14:38:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:13.312 14:38:58 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:13.312 14:38:58 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:13.312 14:38:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:13.312 14:38:58 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:13.312 14:38:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:13.312 14:38:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:13.312 14:38:58 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:13.312 14:38:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:13.312 14:38:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:13.312 14:38:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:13.312 14:38:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.312 14:38:58 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.312 14:38:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.312 14:38:58 -- paths/export.sh@5 -- $ export PATH 00:02:13.312 14:38:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.312 14:38:58 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:13.312 14:38:58 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:13.312 14:38:58 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731854338.XXXXXX 00:02:13.312 14:38:58 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731854338.8WY9TN 00:02:13.312 14:38:58 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:13.312 14:38:58 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:13.312 14:38:58 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:13.312 14:38:58 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:13.312 14:38:58 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:13.312 14:38:58 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:13.312 14:38:58 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:13.312 14:38:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.312 14:38:58 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:13.312 14:38:58 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:13.312 14:38:58 -- pm/common@17 -- $ local monitor 00:02:13.312 14:38:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.312 14:38:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.312 14:38:58 -- pm/common@25 -- $ sleep 1 00:02:13.312 14:38:58 -- pm/common@21 -- $ date +%s 00:02:13.312 14:38:58 -- pm/common@21 -- $ date +%s 00:02:13.312 14:38:58 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731854338 00:02:13.312 14:38:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731854338 00:02:13.312 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731854338_collect-cpu-load.pm.log 00:02:13.312 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731854338_collect-vmstat.pm.log 00:02:14.247 14:38:59 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:14.247 14:38:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:14.247 14:38:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:14.247 14:38:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:14.247 14:38:59 -- spdk/autobuild.sh@16 -- $ date -u 00:02:14.247 Sun Nov 17 02:38:59 PM UTC 2024 00:02:14.247 14:38:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:14.247 v25.01-pre-189-g83e8405e4 00:02:14.247 14:38:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:14.247 14:38:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:14.247 14:38:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:14.247 14:38:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:14.247 14:38:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.247 ************************************ 00:02:14.247 START TEST asan 00:02:14.247 ************************************ 00:02:14.247 using asan 00:02:14.247 14:38:59 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:14.247 00:02:14.247 real 0m0.000s 00:02:14.247 user 0m0.000s 00:02:14.247 sys 0m0.000s 00:02:14.247 14:38:59 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:14.247 ************************************ 00:02:14.247 END TEST asan 00:02:14.247 ************************************ 00:02:14.247 14:38:59 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:14.247 14:38:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:14.247 14:38:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:14.247 14:38:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:14.247 14:38:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:14.247 14:38:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.247 ************************************ 00:02:14.247 START TEST ubsan 00:02:14.247 ************************************ 00:02:14.247 using ubsan 00:02:14.247 14:38:59 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:14.247 00:02:14.247 real 0m0.000s 00:02:14.247 user 0m0.000s 00:02:14.247 sys 0m0.000s 00:02:14.247 14:38:59 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:14.247 ************************************ 00:02:14.247 END TEST ubsan 00:02:14.247 14:38:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:14.247 ************************************ 00:02:14.506 14:38:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:14.506 14:38:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:14.506 14:38:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:14.506 14:38:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:14.506 14:38:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:14.506 14:38:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:14.506 14:38:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:14.506 14:38:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:14.506 14:38:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:14.506 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:14.506 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:14.765 Using 'verbs' RDMA provider 00:02:25.687 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:35.695 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:35.954 Creating mk/config.mk...done. 00:02:35.954 Creating mk/cc.flags.mk...done. 00:02:35.954 Type 'make' to build. 00:02:35.954 14:39:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:35.954 14:39:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:35.954 14:39:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:35.954 14:39:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.954 ************************************ 00:02:35.954 START TEST make 00:02:35.954 ************************************ 00:02:35.954 14:39:21 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:36.215 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:36.215 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:36.215 meson setup builddir \ 00:02:36.215 -Dwith-libaio=enabled \ 00:02:36.215 -Dwith-liburing=enabled \ 00:02:36.215 -Dwith-libvfn=disabled \ 00:02:36.215 -Dwith-spdk=disabled \ 00:02:36.215 -Dexamples=false \ 00:02:36.215 -Dtests=false \ 00:02:36.215 -Dtools=false && \ 00:02:36.215 meson compile -C builddir && \ 00:02:36.215 cd -) 00:02:36.215 make[1]: Nothing to be done for 'all'. 
00:02:38.116 The Meson build system 00:02:38.116 Version: 1.5.0 00:02:38.116 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:38.116 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:38.116 Build type: native build 00:02:38.116 Project name: xnvme 00:02:38.116 Project version: 0.7.5 00:02:38.116 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:38.116 C linker for the host machine: cc ld.bfd 2.40-14 00:02:38.116 Host machine cpu family: x86_64 00:02:38.116 Host machine cpu: x86_64 00:02:38.116 Message: host_machine.system: linux 00:02:38.116 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:38.116 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:38.116 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:38.116 Run-time dependency threads found: YES 00:02:38.116 Has header "setupapi.h" : NO 00:02:38.116 Has header "linux/blkzoned.h" : YES 00:02:38.116 Has header "linux/blkzoned.h" : YES (cached) 00:02:38.116 Has header "libaio.h" : YES 00:02:38.116 Library aio found: YES 00:02:38.116 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:38.116 Run-time dependency liburing found: YES 2.2 00:02:38.116 Dependency libvfn skipped: feature with-libvfn disabled 00:02:38.116 Found CMake: /usr/bin/cmake (3.27.7) 00:02:38.116 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:38.116 Subproject spdk : skipped: feature with-spdk disabled 00:02:38.116 Run-time dependency appleframeworks found: NO (tried framework) 00:02:38.116 Run-time dependency appleframeworks found: NO (tried framework) 00:02:38.116 Library rt found: YES 00:02:38.116 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:38.116 Configuring xnvme_config.h using configuration 00:02:38.116 Configuring xnvme.spec using configuration 00:02:38.116 Run-time dependency bash-completion found: YES 2.11 00:02:38.116 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:38.116 Program cp found: YES (/usr/bin/cp) 00:02:38.116 Build targets in project: 3 00:02:38.116 00:02:38.116 xnvme 0.7.5 00:02:38.116 00:02:38.116 Subprojects 00:02:38.116 spdk : NO Feature 'with-spdk' disabled 00:02:38.116 00:02:38.116 User defined options 00:02:38.116 examples : false 00:02:38.116 tests : false 00:02:38.116 tools : false 00:02:38.116 with-libaio : enabled 00:02:38.116 with-liburing: enabled 00:02:38.116 with-libvfn : disabled 00:02:38.116 with-spdk : disabled 00:02:38.116 00:02:38.116 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.374 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:38.678 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:38.678 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:38.678 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:38.678 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:38.678 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:38.678 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:38.678 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:38.678 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:38.678 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:38.678 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 
00:02:38.678 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:38.678 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:38.678 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:38.678 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:38.678 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:38.679 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:38.679 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:38.679 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:38.679 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:38.959 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:38.959 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:38.959 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:38.959 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:38.959 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:38.959 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:38.959 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:38.959 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:38.959 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:38.959 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:38.959 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:38.959 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:38.959 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:38.959 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:38.959 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:38.959 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:38.959 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:38.959 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:38.959 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:38.959 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:38.959 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:38.959 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:38.959 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:38.959 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:38.959 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:38.959 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:38.959 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:38.959 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:38.959 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:38.959 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:38.959 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:38.959 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:38.959 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:38.959 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:38.959 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:38.959 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:38.959 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:38.960 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:38.960 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:39.221 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:39.221 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:39.221 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:39.221 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:39.221 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:39.222 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:39.222 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:39.222 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:39.222 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:39.222 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:39.222 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:39.222 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:39.222 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:39.222 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:39.222 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:39.789 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:39.789 [75/76] Linking static target lib/libxnvme.a 00:02:39.789 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:39.789 INFO: autodetecting backend as ninja 00:02:39.789 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:39.789 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:46.348 The Meson build system 00:02:46.348 Version: 1.5.0 00:02:46.348 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:46.348 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:46.348 Build type: native build 00:02:46.348 Program cat found: YES (/usr/bin/cat) 00:02:46.348 Project name: DPDK 00:02:46.348 Project version: 24.03.0 00:02:46.348 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:46.348 C linker for the host machine: cc ld.bfd 2.40-14 00:02:46.348 Host machine cpu family: x86_64 00:02:46.348 Host machine cpu: x86_64 00:02:46.348 Message: ## Building in Developer Mode ## 00:02:46.348 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:46.348 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:46.348 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:46.348 Program python3 found: YES (/usr/bin/python3) 00:02:46.348 Program cat found: YES (/usr/bin/cat) 00:02:46.348 Compiler for C supports arguments -march=native: YES 00:02:46.348 Checking for size of "void *" : 8 00:02:46.348 Checking for size of "void *" : 8 (cached) 00:02:46.348 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:46.348 Library m found: YES 00:02:46.348 Library numa found: YES 00:02:46.348 Has header "numaif.h" : YES 00:02:46.348 Library fdt found: NO 00:02:46.348 Library execinfo found: NO 00:02:46.348 Has header "execinfo.h" : YES 00:02:46.348 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:46.348 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:46.348 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:46.348 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:46.348 Run-time dependency openssl found: YES 3.1.1 00:02:46.348 Run-time dependency libpcap found: YES 1.10.4 00:02:46.348 Has header "pcap.h" with dependency libpcap: YES 00:02:46.348 Compiler for C supports arguments -Wcast-qual: YES 00:02:46.348 Compiler for C supports arguments -Wdeprecated: YES 00:02:46.348 Compiler for C supports arguments -Wformat: YES 00:02:46.348 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:46.348 Compiler for C supports arguments -Wformat-security: NO 00:02:46.348 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:46.348 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:46.348 Compiler for C supports arguments -Wnested-externs: YES 00:02:46.348 Compiler for C supports arguments -Wold-style-definition: YES 00:02:46.348 Compiler for C supports arguments -Wpointer-arith: YES 00:02:46.348 Compiler for C supports arguments -Wsign-compare: YES 00:02:46.348 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:46.348 Compiler for C supports arguments -Wundef: YES 00:02:46.348 Compiler for C supports arguments -Wwrite-strings: YES 00:02:46.348 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:46.348 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:46.348 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:46.348 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:46.348 Program objdump found: YES (/usr/bin/objdump) 00:02:46.348 Compiler for C supports arguments -mavx512f: YES 00:02:46.348 Checking if "AVX512 checking" compiles: YES 00:02:46.348 Fetching value of define "__SSE4_2__" : 1 00:02:46.348 Fetching value of define "__AES__" : 1 00:02:46.348 Fetching value of define "__AVX__" : 1 00:02:46.348 Fetching value of define "__AVX2__" : 1 00:02:46.348 Fetching value of define "__AVX512BW__" : 1 00:02:46.348 Fetching value of define "__AVX512CD__" : 1 00:02:46.348 Fetching value of define "__AVX512DQ__" : 1 00:02:46.348 Fetching value of define "__AVX512F__" : 1 00:02:46.349 Fetching value of define "__AVX512VL__" : 1 00:02:46.349 Fetching value of define "__PCLMUL__" : 1 00:02:46.349 Fetching value of define "__RDRND__" : 1 00:02:46.349 Fetching value of define "__RDSEED__" : 1 00:02:46.349 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:46.349 Fetching value of define "__znver1__" : (undefined) 00:02:46.349 Fetching value of define "__znver2__" : (undefined) 00:02:46.349 Fetching value of define "__znver3__" : (undefined) 00:02:46.349 Fetching value of define "__znver4__" : (undefined) 00:02:46.349 Library asan found: YES 00:02:46.349 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:46.349 Message: lib/log: Defining dependency "log" 00:02:46.349 Message: lib/kvargs: Defining dependency "kvargs" 00:02:46.349 Message: lib/telemetry: Defining dependency "telemetry" 00:02:46.349 Library rt found: YES 00:02:46.349 Checking for function "getentropy" : NO 00:02:46.349 Message: 
lib/eal: Defining dependency "eal" 00:02:46.349 Message: lib/ring: Defining dependency "ring" 00:02:46.349 Message: lib/rcu: Defining dependency "rcu" 00:02:46.349 Message: lib/mempool: Defining dependency "mempool" 00:02:46.349 Message: lib/mbuf: Defining dependency "mbuf" 00:02:46.349 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:46.349 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:46.349 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:46.349 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:46.349 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:46.349 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:46.349 Compiler for C supports arguments -mpclmul: YES 00:02:46.349 Compiler for C supports arguments -maes: YES 00:02:46.349 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:46.349 Compiler for C supports arguments -mavx512bw: YES 00:02:46.349 Compiler for C supports arguments -mavx512dq: YES 00:02:46.349 Compiler for C supports arguments -mavx512vl: YES 00:02:46.349 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:46.349 Compiler for C supports arguments -mavx2: YES 00:02:46.349 Compiler for C supports arguments -mavx: YES 00:02:46.349 Message: lib/net: Defining dependency "net" 00:02:46.349 Message: lib/meter: Defining dependency "meter" 00:02:46.349 Message: lib/ethdev: Defining dependency "ethdev" 00:02:46.349 Message: lib/pci: Defining dependency "pci" 00:02:46.349 Message: lib/cmdline: Defining dependency "cmdline" 00:02:46.349 Message: lib/hash: Defining dependency "hash" 00:02:46.349 Message: lib/timer: Defining dependency "timer" 00:02:46.349 Message: lib/compressdev: Defining dependency "compressdev" 00:02:46.349 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:46.349 Message: lib/dmadev: Defining dependency "dmadev" 00:02:46.349 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:46.349 Message: lib/power: Defining dependency "power" 00:02:46.349 Message: lib/reorder: Defining dependency "reorder" 00:02:46.349 Message: lib/security: Defining dependency "security" 00:02:46.349 Has header "linux/userfaultfd.h" : YES 00:02:46.349 Has header "linux/vduse.h" : YES 00:02:46.349 Message: lib/vhost: Defining dependency "vhost" 00:02:46.349 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:46.349 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:46.349 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:46.349 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:46.349 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:46.349 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:46.349 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:46.349 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:46.349 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:46.349 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:46.349 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:46.349 Configuring doxy-api-html.conf using configuration 00:02:46.349 Configuring doxy-api-man.conf using configuration 00:02:46.349 Program mandb found: YES (/usr/bin/mandb) 00:02:46.349 Program sphinx-build found: NO 00:02:46.349 Configuring rte_build_config.h using configuration 00:02:46.349 Message: 00:02:46.349 ================= 00:02:46.349 Applications Enabled 00:02:46.349 
================= 00:02:46.349 00:02:46.349 apps: 00:02:46.349 00:02:46.349 00:02:46.349 Message: 00:02:46.349 ================= 00:02:46.349 Libraries Enabled 00:02:46.349 ================= 00:02:46.349 00:02:46.349 libs: 00:02:46.349 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:46.349 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:46.349 cryptodev, dmadev, power, reorder, security, vhost, 00:02:46.349 00:02:46.349 Message: 00:02:46.349 =============== 00:02:46.349 Drivers Enabled 00:02:46.349 =============== 00:02:46.349 00:02:46.349 common: 00:02:46.349 00:02:46.349 bus: 00:02:46.349 pci, vdev, 00:02:46.349 mempool: 00:02:46.349 ring, 00:02:46.349 dma: 00:02:46.349 00:02:46.349 net: 00:02:46.349 00:02:46.349 crypto: 00:02:46.349 00:02:46.349 compress: 00:02:46.349 00:02:46.349 vdpa: 00:02:46.349 00:02:46.349 00:02:46.349 Message: 00:02:46.349 ================= 00:02:46.349 Content Skipped 00:02:46.349 ================= 00:02:46.349 00:02:46.349 apps: 00:02:46.349 dumpcap: explicitly disabled via build config 00:02:46.349 graph: explicitly disabled via build config 00:02:46.349 pdump: explicitly disabled via build config 00:02:46.349 proc-info: explicitly disabled via build config 00:02:46.349 test-acl: explicitly disabled via build config 00:02:46.349 test-bbdev: explicitly disabled via build config 00:02:46.349 test-cmdline: explicitly disabled via build config 00:02:46.349 test-compress-perf: explicitly disabled via build config 00:02:46.349 test-crypto-perf: explicitly disabled via build config 00:02:46.349 test-dma-perf: explicitly disabled via build config 00:02:46.349 test-eventdev: explicitly disabled via build config 00:02:46.349 test-fib: explicitly disabled via build config 00:02:46.349 test-flow-perf: explicitly disabled via build config 00:02:46.349 test-gpudev: explicitly disabled via build config 00:02:46.349 test-mldev: explicitly disabled via build config 00:02:46.349 test-pipeline: explicitly disabled via build config 00:02:46.349 test-pmd: explicitly disabled via build config 00:02:46.349 test-regex: explicitly disabled via build config 00:02:46.349 test-sad: explicitly disabled via build config 00:02:46.349 test-security-perf: explicitly disabled via build config 00:02:46.349 00:02:46.349 libs: 00:02:46.349 argparse: explicitly disabled via build config 00:02:46.349 metrics: explicitly disabled via build config 00:02:46.349 acl: explicitly disabled via build config 00:02:46.349 bbdev: explicitly disabled via build config 00:02:46.349 bitratestats: explicitly disabled via build config 00:02:46.349 bpf: explicitly disabled via build config 00:02:46.349 cfgfile: explicitly disabled via build config 00:02:46.349 distributor: explicitly disabled via build config 00:02:46.349 efd: explicitly disabled via build config 00:02:46.349 eventdev: explicitly disabled via build config 00:02:46.349 dispatcher: explicitly disabled via build config 00:02:46.349 gpudev: explicitly disabled via build config 00:02:46.349 gro: explicitly disabled via build config 00:02:46.349 gso: explicitly disabled via build config 00:02:46.349 ip_frag: explicitly disabled via build config 00:02:46.349 jobstats: explicitly disabled via build config 00:02:46.349 latencystats: explicitly disabled via build config 00:02:46.349 lpm: explicitly disabled via build config 00:02:46.349 member: explicitly disabled via build config 00:02:46.349 pcapng: explicitly disabled via build config 00:02:46.349 rawdev: explicitly disabled via build config 00:02:46.349 regexdev: explicitly 
disabled via build config 00:02:46.349 mldev: explicitly disabled via build config 00:02:46.349 rib: explicitly disabled via build config 00:02:46.349 sched: explicitly disabled via build config 00:02:46.349 stack: explicitly disabled via build config 00:02:46.349 ipsec: explicitly disabled via build config 00:02:46.349 pdcp: explicitly disabled via build config 00:02:46.349 fib: explicitly disabled via build config 00:02:46.349 port: explicitly disabled via build config 00:02:46.349 pdump: explicitly disabled via build config 00:02:46.349 table: explicitly disabled via build config 00:02:46.349 pipeline: explicitly disabled via build config 00:02:46.349 graph: explicitly disabled via build config 00:02:46.349 node: explicitly disabled via build config 00:02:46.349 00:02:46.349 drivers: 00:02:46.349 common/cpt: not in enabled drivers build config 00:02:46.349 common/dpaax: not in enabled drivers build config 00:02:46.349 common/iavf: not in enabled drivers build config 00:02:46.349 common/idpf: not in enabled drivers build config 00:02:46.349 common/ionic: not in enabled drivers build config 00:02:46.349 common/mvep: not in enabled drivers build config 00:02:46.349 common/octeontx: not in enabled drivers build config 00:02:46.349 bus/auxiliary: not in enabled drivers build config 00:02:46.349 bus/cdx: not in enabled drivers build config 00:02:46.349 bus/dpaa: not in enabled drivers build config 00:02:46.349 bus/fslmc: not in enabled drivers build config 00:02:46.349 bus/ifpga: not in enabled drivers build config 00:02:46.349 bus/platform: not in enabled drivers build config 00:02:46.349 bus/uacce: not in enabled drivers build config 00:02:46.349 bus/vmbus: not in enabled drivers build config 00:02:46.349 common/cnxk: not in enabled drivers build config 00:02:46.349 common/mlx5: not in enabled drivers build config 00:02:46.349 common/nfp: not in enabled drivers build config 00:02:46.349 common/nitrox: not in enabled drivers build config 00:02:46.349 common/qat: not in enabled drivers build config 00:02:46.349 common/sfc_efx: not in enabled drivers build config 00:02:46.349 mempool/bucket: not in enabled drivers build config 00:02:46.350 mempool/cnxk: not in enabled drivers build config 00:02:46.350 mempool/dpaa: not in enabled drivers build config 00:02:46.350 mempool/dpaa2: not in enabled drivers build config 00:02:46.350 mempool/octeontx: not in enabled drivers build config 00:02:46.350 mempool/stack: not in enabled drivers build config 00:02:46.350 dma/cnxk: not in enabled drivers build config 00:02:46.350 dma/dpaa: not in enabled drivers build config 00:02:46.350 dma/dpaa2: not in enabled drivers build config 00:02:46.350 dma/hisilicon: not in enabled drivers build config 00:02:46.350 dma/idxd: not in enabled drivers build config 00:02:46.350 dma/ioat: not in enabled drivers build config 00:02:46.350 dma/skeleton: not in enabled drivers build config 00:02:46.350 net/af_packet: not in enabled drivers build config 00:02:46.350 net/af_xdp: not in enabled drivers build config 00:02:46.350 net/ark: not in enabled drivers build config 00:02:46.350 net/atlantic: not in enabled drivers build config 00:02:46.350 net/avp: not in enabled drivers build config 00:02:46.350 net/axgbe: not in enabled drivers build config 00:02:46.350 net/bnx2x: not in enabled drivers build config 00:02:46.350 net/bnxt: not in enabled drivers build config 00:02:46.350 net/bonding: not in enabled drivers build config 00:02:46.350 net/cnxk: not in enabled drivers build config 00:02:46.350 net/cpfl: not in enabled drivers 
build config 00:02:46.350 net/cxgbe: not in enabled drivers build config 00:02:46.350 net/dpaa: not in enabled drivers build config 00:02:46.350 net/dpaa2: not in enabled drivers build config 00:02:46.350 net/e1000: not in enabled drivers build config 00:02:46.350 net/ena: not in enabled drivers build config 00:02:46.350 net/enetc: not in enabled drivers build config 00:02:46.350 net/enetfec: not in enabled drivers build config 00:02:46.350 net/enic: not in enabled drivers build config 00:02:46.350 net/failsafe: not in enabled drivers build config 00:02:46.350 net/fm10k: not in enabled drivers build config 00:02:46.350 net/gve: not in enabled drivers build config 00:02:46.350 net/hinic: not in enabled drivers build config 00:02:46.350 net/hns3: not in enabled drivers build config 00:02:46.350 net/i40e: not in enabled drivers build config 00:02:46.350 net/iavf: not in enabled drivers build config 00:02:46.350 net/ice: not in enabled drivers build config 00:02:46.350 net/idpf: not in enabled drivers build config 00:02:46.350 net/igc: not in enabled drivers build config 00:02:46.350 net/ionic: not in enabled drivers build config 00:02:46.350 net/ipn3ke: not in enabled drivers build config 00:02:46.350 net/ixgbe: not in enabled drivers build config 00:02:46.350 net/mana: not in enabled drivers build config 00:02:46.350 net/memif: not in enabled drivers build config 00:02:46.350 net/mlx4: not in enabled drivers build config 00:02:46.350 net/mlx5: not in enabled drivers build config 00:02:46.350 net/mvneta: not in enabled drivers build config 00:02:46.350 net/mvpp2: not in enabled drivers build config 00:02:46.350 net/netvsc: not in enabled drivers build config 00:02:46.350 net/nfb: not in enabled drivers build config 00:02:46.350 net/nfp: not in enabled drivers build config 00:02:46.350 net/ngbe: not in enabled drivers build config 00:02:46.350 net/null: not in enabled drivers build config 00:02:46.350 net/octeontx: not in enabled drivers build config 00:02:46.350 net/octeon_ep: not in enabled drivers build config 00:02:46.350 net/pcap: not in enabled drivers build config 00:02:46.350 net/pfe: not in enabled drivers build config 00:02:46.350 net/qede: not in enabled drivers build config 00:02:46.350 net/ring: not in enabled drivers build config 00:02:46.350 net/sfc: not in enabled drivers build config 00:02:46.350 net/softnic: not in enabled drivers build config 00:02:46.350 net/tap: not in enabled drivers build config 00:02:46.350 net/thunderx: not in enabled drivers build config 00:02:46.350 net/txgbe: not in enabled drivers build config 00:02:46.350 net/vdev_netvsc: not in enabled drivers build config 00:02:46.350 net/vhost: not in enabled drivers build config 00:02:46.350 net/virtio: not in enabled drivers build config 00:02:46.350 net/vmxnet3: not in enabled drivers build config 00:02:46.350 raw/*: missing internal dependency, "rawdev" 00:02:46.350 crypto/armv8: not in enabled drivers build config 00:02:46.350 crypto/bcmfs: not in enabled drivers build config 00:02:46.350 crypto/caam_jr: not in enabled drivers build config 00:02:46.350 crypto/ccp: not in enabled drivers build config 00:02:46.350 crypto/cnxk: not in enabled drivers build config 00:02:46.350 crypto/dpaa_sec: not in enabled drivers build config 00:02:46.350 crypto/dpaa2_sec: not in enabled drivers build config 00:02:46.350 crypto/ipsec_mb: not in enabled drivers build config 00:02:46.350 crypto/mlx5: not in enabled drivers build config 00:02:46.350 crypto/mvsam: not in enabled drivers build config 00:02:46.350 crypto/nitrox: 
not in enabled drivers build config 00:02:46.350 crypto/null: not in enabled drivers build config 00:02:46.350 crypto/octeontx: not in enabled drivers build config 00:02:46.350 crypto/openssl: not in enabled drivers build config 00:02:46.350 crypto/scheduler: not in enabled drivers build config 00:02:46.350 crypto/uadk: not in enabled drivers build config 00:02:46.350 crypto/virtio: not in enabled drivers build config 00:02:46.350 compress/isal: not in enabled drivers build config 00:02:46.350 compress/mlx5: not in enabled drivers build config 00:02:46.350 compress/nitrox: not in enabled drivers build config 00:02:46.350 compress/octeontx: not in enabled drivers build config 00:02:46.350 compress/zlib: not in enabled drivers build config 00:02:46.350 regex/*: missing internal dependency, "regexdev" 00:02:46.350 ml/*: missing internal dependency, "mldev" 00:02:46.350 vdpa/ifc: not in enabled drivers build config 00:02:46.350 vdpa/mlx5: not in enabled drivers build config 00:02:46.350 vdpa/nfp: not in enabled drivers build config 00:02:46.350 vdpa/sfc: not in enabled drivers build config 00:02:46.350 event/*: missing internal dependency, "eventdev" 00:02:46.350 baseband/*: missing internal dependency, "bbdev" 00:02:46.350 gpu/*: missing internal dependency, "gpudev" 00:02:46.350 00:02:46.350 00:02:46.350 Build targets in project: 84 00:02:46.350 00:02:46.350 DPDK 24.03.0 00:02:46.350 00:02:46.350 User defined options 00:02:46.350 buildtype : debug 00:02:46.350 default_library : shared 00:02:46.350 libdir : lib 00:02:46.350 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:46.350 b_sanitize : address 00:02:46.350 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:46.350 c_link_args : 00:02:46.350 cpu_instruction_set: native 00:02:46.350 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:46.350 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:46.350 enable_docs : false 00:02:46.350 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:46.350 enable_kmods : false 00:02:46.350 max_lcores : 128 00:02:46.350 tests : false 00:02:46.350 00:02:46.350 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:46.350 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:46.350 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:46.350 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.350 [3/267] Linking static target lib/librte_kvargs.a 00:02:46.350 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:46.350 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:46.350 [6/267] Linking static target lib/librte_log.a 00:02:46.350 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:46.350 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:46.350 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:46.350 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:46.350 [11/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:46.350 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:46.609 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:46.609 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.609 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.609 [16/267] Linking static target lib/librte_telemetry.a 00:02:46.609 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:46.609 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:46.609 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:46.867 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.867 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:46.867 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:46.867 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.867 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:46.867 [25/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.867 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.867 [27/267] Linking target lib/librte_log.so.24.1 00:02:47.125 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:47.125 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:47.125 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:47.125 [31/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.125 [32/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:47.125 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.125 [34/267] Linking target lib/librte_kvargs.so.24.1 00:02:47.125 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.125 [36/267] Linking target lib/librte_telemetry.so.24.1 00:02:47.125 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:47.125 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.383 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.383 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.383 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.383 [42/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:47.383 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.383 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:47.383 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.641 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.641 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.641 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.641 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
00:02:47.641 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.641 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.641 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.899 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.899 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.899 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:47.899 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.899 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.899 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:47.899 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.157 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.157 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.157 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.157 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.157 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.157 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.157 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.415 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.415 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.415 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:48.415 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.415 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.415 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.673 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.673 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.673 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.673 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.673 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:48.673 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:48.673 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:48.673 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.673 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:48.673 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.931 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:48.931 [84/267] Linking static target lib/librte_ring.a 00:02:48.931 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.188 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.188 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:49.188 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.188 [89/267] Linking static target lib/librte_eal.a 00:02:49.188 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.188 
[91/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.188 [92/267] Linking static target lib/librte_rcu.a 00:02:49.188 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.188 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.188 [95/267] Linking static target lib/librte_mempool.a 00:02:49.446 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.446 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.446 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.704 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.704 [100/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.704 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.704 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.704 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.704 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:49.962 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.962 [106/267] Linking static target lib/librte_net.a 00:02:49.962 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.962 [108/267] Linking static target lib/librte_meter.a 00:02:49.962 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.962 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.962 [111/267] Linking static target lib/librte_mbuf.a 00:02:49.962 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.287 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.287 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.287 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.287 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.287 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.287 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.544 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:50.544 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.544 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.802 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:50.802 [123/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.802 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:50.802 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.802 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.802 [127/267] Linking static target lib/librte_pci.a 00:02:50.802 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.802 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.060 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.060 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:51.060 [132/267] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.060 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.060 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.060 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.060 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.060 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.060 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.060 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.060 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.060 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.060 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.060 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.319 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.319 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:51.319 [146/267] Linking static target lib/librte_cmdline.a 00:02:51.319 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:51.319 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.576 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.576 [150/267] Linking static target lib/librte_timer.a 00:02:51.576 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:51.576 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.576 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.834 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:51.834 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.834 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.834 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.834 [158/267] Linking static target lib/librte_compressdev.a 00:02:51.834 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.834 [160/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.092 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.092 [162/267] Linking static target lib/librte_hash.a 00:02:52.092 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:52.092 [164/267] Linking static target lib/librte_ethdev.a 00:02:52.092 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.092 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.092 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.092 [168/267] Linking static target lib/librte_dmadev.a 00:02:52.092 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.350 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.350 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.350 [172/267] Generating lib/cmdline.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:52.608 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.608 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.608 [175/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.608 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.608 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.865 [178/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.865 [179/267] Linking static target lib/librte_cryptodev.a 00:02:52.865 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.865 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:52.865 [182/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.865 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.865 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:52.865 [185/267] Linking static target lib/librte_power.a 00:02:53.123 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.123 [187/267] Linking static target lib/librte_reorder.a 00:02:53.123 [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.123 [189/267] Linking static target lib/librte_security.a 00:02:53.123 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.123 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.123 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.381 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.381 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:53.638 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.895 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:53.895 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.895 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:53.895 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:53.895 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.152 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.152 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.152 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.152 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.409 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.409 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.409 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.409 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.409 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.666 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:54.666 [211/267] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:54.666 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.666 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.666 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.666 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.666 [216/267] Linking static target drivers/librte_bus_vdev.a 00:02:54.666 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.666 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:54.666 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:54.666 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:54.666 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:54.925 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.925 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.925 [224/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.925 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:54.925 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.490 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:56.424 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.424 [229/267] Linking target lib/librte_eal.so.24.1 00:02:56.424 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:56.424 [231/267] Linking target lib/librte_pci.so.24.1 00:02:56.424 [232/267] Linking target lib/librte_timer.so.24.1 00:02:56.424 [233/267] Linking target lib/librte_ring.so.24.1 00:02:56.424 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:56.424 [235/267] Linking target lib/librte_meter.so.24.1 00:02:56.424 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:56.424 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:56.682 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:56.682 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:56.682 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:56.682 [241/267] Linking target lib/librte_mempool.so.24.1 00:02:56.682 [242/267] Linking target lib/librte_rcu.so.24.1 00:02:56.682 [243/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:56.682 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:56.682 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:56.682 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:56.682 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:56.682 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:56.940 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:56.940 [250/267] Linking target lib/librte_net.so.24.1 00:02:56.940 [251/267] Linking target lib/librte_compressdev.so.24.1 
00:02:56.940 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:56.940 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:56.940 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:56.940 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:56.940 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:56.940 [257/267] Linking target lib/librte_hash.so.24.1 00:02:56.940 [258/267] Linking target lib/librte_security.so.24.1 00:02:57.198 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:57.198 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.456 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:57.456 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:57.456 [263/267] Linking target lib/librte_power.so.24.1 00:02:57.714 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.972 [265/267] Linking static target lib/librte_vhost.a 00:02:58.905 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.905 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:58.905 INFO: autodetecting backend as ninja 00:02:58.905 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:13.817 CC lib/ut/ut.o 00:03:13.817 CC lib/ut_mock/mock.o 00:03:13.817 CC lib/log/log_flags.o 00:03:13.817 CC lib/log/log.o 00:03:13.817 CC lib/log/log_deprecated.o 00:03:13.817 LIB libspdk_ut.a 00:03:13.817 LIB libspdk_ut_mock.a 00:03:13.817 LIB libspdk_log.a 00:03:13.817 SO libspdk_ut.so.2.0 00:03:13.817 SO libspdk_ut_mock.so.6.0 00:03:13.817 SO libspdk_log.so.7.1 00:03:13.817 SYMLINK libspdk_ut.so 00:03:13.817 SYMLINK libspdk_ut_mock.so 00:03:13.817 SYMLINK libspdk_log.so 00:03:13.817 CC lib/util/base64.o 00:03:13.817 CC lib/util/crc16.o 00:03:13.817 CC lib/util/bit_array.o 00:03:13.817 CC lib/util/cpuset.o 00:03:13.817 CC lib/util/crc32.o 00:03:13.817 CC lib/util/crc32c.o 00:03:13.817 CC lib/dma/dma.o 00:03:13.817 CXX lib/trace_parser/trace.o 00:03:13.817 CC lib/ioat/ioat.o 00:03:13.817 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.817 CC lib/util/crc32_ieee.o 00:03:13.817 CC lib/util/crc64.o 00:03:13.817 CC lib/vfio_user/host/vfio_user.o 00:03:13.817 CC lib/util/dif.o 00:03:13.817 CC lib/util/fd.o 00:03:13.817 LIB libspdk_dma.a 00:03:13.817 CC lib/util/fd_group.o 00:03:13.817 CC lib/util/file.o 00:03:13.817 SO libspdk_dma.so.5.0 00:03:13.817 CC lib/util/hexlify.o 00:03:13.817 SYMLINK libspdk_dma.so 00:03:13.817 CC lib/util/iov.o 00:03:13.817 LIB libspdk_ioat.a 00:03:13.817 CC lib/util/math.o 00:03:13.817 CC lib/util/net.o 00:03:13.817 SO libspdk_ioat.so.7.0 00:03:13.817 LIB libspdk_vfio_user.a 00:03:13.817 CC lib/util/pipe.o 00:03:13.817 SYMLINK libspdk_ioat.so 00:03:13.817 CC lib/util/strerror_tls.o 00:03:13.817 SO libspdk_vfio_user.so.5.0 00:03:13.817 CC lib/util/string.o 00:03:13.817 SYMLINK libspdk_vfio_user.so 00:03:13.817 CC lib/util/uuid.o 00:03:13.817 CC lib/util/xor.o 00:03:13.817 CC lib/util/zipf.o 00:03:13.817 CC lib/util/md5.o 00:03:13.817 LIB libspdk_util.a 00:03:13.817 SO libspdk_util.so.10.1 00:03:13.817 SYMLINK libspdk_util.so 00:03:13.817 LIB libspdk_trace_parser.a 00:03:13.817 SO libspdk_trace_parser.so.6.0 00:03:13.817 CC lib/conf/conf.o 00:03:13.817 CC lib/vmd/vmd.o 00:03:13.817 CC lib/vmd/led.o 00:03:13.817 CC 
lib/rdma_utils/rdma_utils.o 00:03:13.817 CC lib/env_dpdk/env.o 00:03:13.817 CC lib/env_dpdk/memory.o 00:03:13.817 CC lib/env_dpdk/pci.o 00:03:13.817 CC lib/json/json_parse.o 00:03:13.817 CC lib/idxd/idxd.o 00:03:13.817 SYMLINK libspdk_trace_parser.so 00:03:13.817 CC lib/idxd/idxd_user.o 00:03:13.817 CC lib/idxd/idxd_kernel.o 00:03:13.817 LIB libspdk_conf.a 00:03:13.817 SO libspdk_conf.so.6.0 00:03:13.817 LIB libspdk_rdma_utils.a 00:03:13.817 CC lib/env_dpdk/init.o 00:03:13.817 SO libspdk_rdma_utils.so.1.0 00:03:13.817 CC lib/json/json_util.o 00:03:13.817 SYMLINK libspdk_conf.so 00:03:13.817 CC lib/json/json_write.o 00:03:13.817 SYMLINK libspdk_rdma_utils.so 00:03:13.817 CC lib/env_dpdk/threads.o 00:03:13.817 CC lib/env_dpdk/pci_ioat.o 00:03:13.817 CC lib/rdma_provider/common.o 00:03:13.817 CC lib/env_dpdk/pci_virtio.o 00:03:13.817 CC lib/env_dpdk/pci_vmd.o 00:03:13.817 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:13.817 CC lib/env_dpdk/pci_idxd.o 00:03:13.817 LIB libspdk_json.a 00:03:13.817 CC lib/env_dpdk/pci_event.o 00:03:13.817 SO libspdk_json.so.6.0 00:03:13.817 CC lib/env_dpdk/sigbus_handler.o 00:03:13.817 CC lib/env_dpdk/pci_dpdk.o 00:03:13.817 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:13.817 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:13.817 SYMLINK libspdk_json.so 00:03:13.817 LIB libspdk_rdma_provider.a 00:03:13.817 LIB libspdk_idxd.a 00:03:13.817 LIB libspdk_vmd.a 00:03:13.817 SO libspdk_idxd.so.12.1 00:03:13.817 SO libspdk_rdma_provider.so.7.0 00:03:13.817 SO libspdk_vmd.so.6.0 00:03:13.817 SYMLINK libspdk_rdma_provider.so 00:03:13.817 SYMLINK libspdk_idxd.so 00:03:13.817 SYMLINK libspdk_vmd.so 00:03:14.077 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:14.077 CC lib/jsonrpc/jsonrpc_server.o 00:03:14.077 CC lib/jsonrpc/jsonrpc_client.o 00:03:14.077 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:14.337 LIB libspdk_jsonrpc.a 00:03:14.337 SO libspdk_jsonrpc.so.6.0 00:03:14.337 SYMLINK libspdk_jsonrpc.so 00:03:14.597 CC lib/rpc/rpc.o 00:03:14.597 LIB libspdk_env_dpdk.a 00:03:14.597 SO libspdk_env_dpdk.so.15.1 00:03:14.858 LIB libspdk_rpc.a 00:03:14.858 SYMLINK libspdk_env_dpdk.so 00:03:14.858 SO libspdk_rpc.so.6.0 00:03:14.858 SYMLINK libspdk_rpc.so 00:03:15.120 CC lib/keyring/keyring.o 00:03:15.120 CC lib/keyring/keyring_rpc.o 00:03:15.120 CC lib/trace/trace.o 00:03:15.120 CC lib/trace/trace_rpc.o 00:03:15.120 CC lib/trace/trace_flags.o 00:03:15.120 CC lib/notify/notify.o 00:03:15.120 CC lib/notify/notify_rpc.o 00:03:15.120 LIB libspdk_notify.a 00:03:15.120 SO libspdk_notify.so.6.0 00:03:15.120 LIB libspdk_keyring.a 00:03:15.120 SYMLINK libspdk_notify.so 00:03:15.120 LIB libspdk_trace.a 00:03:15.381 SO libspdk_keyring.so.2.0 00:03:15.381 SO libspdk_trace.so.11.0 00:03:15.381 SYMLINK libspdk_keyring.so 00:03:15.381 SYMLINK libspdk_trace.so 00:03:15.641 CC lib/thread/thread.o 00:03:15.641 CC lib/thread/iobuf.o 00:03:15.641 CC lib/sock/sock_rpc.o 00:03:15.641 CC lib/sock/sock.o 00:03:15.902 LIB libspdk_sock.a 00:03:15.902 SO libspdk_sock.so.10.0 00:03:16.164 SYMLINK libspdk_sock.so 00:03:16.164 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:16.164 CC lib/nvme/nvme_fabric.o 00:03:16.164 CC lib/nvme/nvme_ctrlr.o 00:03:16.164 CC lib/nvme/nvme_ns_cmd.o 00:03:16.164 CC lib/nvme/nvme_pcie.o 00:03:16.164 CC lib/nvme/nvme.o 00:03:16.164 CC lib/nvme/nvme_ns.o 00:03:16.164 CC lib/nvme/nvme_pcie_common.o 00:03:16.164 CC lib/nvme/nvme_qpair.o 00:03:17.106 CC lib/nvme/nvme_quirks.o 00:03:17.106 CC lib/nvme/nvme_transport.o 00:03:17.106 CC lib/nvme/nvme_discovery.o 00:03:17.106 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:17.106 
CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:17.106 CC lib/nvme/nvme_tcp.o 00:03:17.106 LIB libspdk_thread.a 00:03:17.106 SO libspdk_thread.so.11.0 00:03:17.106 CC lib/nvme/nvme_opal.o 00:03:17.106 SYMLINK libspdk_thread.so 00:03:17.366 CC lib/nvme/nvme_io_msg.o 00:03:17.366 CC lib/accel/accel.o 00:03:17.366 CC lib/nvme/nvme_poll_group.o 00:03:17.366 CC lib/nvme/nvme_zns.o 00:03:17.366 CC lib/nvme/nvme_stubs.o 00:03:17.626 CC lib/nvme/nvme_auth.o 00:03:17.626 CC lib/nvme/nvme_cuse.o 00:03:17.626 CC lib/nvme/nvme_rdma.o 00:03:17.887 CC lib/blob/blobstore.o 00:03:17.887 CC lib/blob/request.o 00:03:17.887 CC lib/blob/zeroes.o 00:03:17.887 CC lib/blob/blob_bs_dev.o 00:03:17.887 CC lib/accel/accel_rpc.o 00:03:18.149 CC lib/accel/accel_sw.o 00:03:18.149 CC lib/init/json_config.o 00:03:18.149 CC lib/init/subsystem.o 00:03:18.149 CC lib/virtio/virtio.o 00:03:18.409 CC lib/virtio/virtio_vhost_user.o 00:03:18.409 CC lib/virtio/virtio_vfio_user.o 00:03:18.409 CC lib/virtio/virtio_pci.o 00:03:18.409 LIB libspdk_accel.a 00:03:18.409 CC lib/init/subsystem_rpc.o 00:03:18.409 CC lib/init/rpc.o 00:03:18.409 SO libspdk_accel.so.16.0 00:03:18.409 SYMLINK libspdk_accel.so 00:03:18.409 CC lib/fsdev/fsdev.o 00:03:18.409 CC lib/fsdev/fsdev_io.o 00:03:18.409 LIB libspdk_init.a 00:03:18.669 CC lib/fsdev/fsdev_rpc.o 00:03:18.669 SO libspdk_init.so.6.0 00:03:18.669 SYMLINK libspdk_init.so 00:03:18.669 CC lib/bdev/bdev.o 00:03:18.669 CC lib/bdev/bdev_rpc.o 00:03:18.669 CC lib/bdev/bdev_zone.o 00:03:18.669 CC lib/bdev/part.o 00:03:18.669 LIB libspdk_virtio.a 00:03:18.669 SO libspdk_virtio.so.7.0 00:03:18.669 CC lib/event/app.o 00:03:18.669 SYMLINK libspdk_virtio.so 00:03:18.669 CC lib/event/reactor.o 00:03:18.669 CC lib/bdev/scsi_nvme.o 00:03:18.929 CC lib/event/log_rpc.o 00:03:18.930 CC lib/event/app_rpc.o 00:03:18.930 CC lib/event/scheduler_static.o 00:03:18.930 LIB libspdk_fsdev.a 00:03:18.930 LIB libspdk_nvme.a 00:03:18.930 SO libspdk_fsdev.so.2.0 00:03:18.930 SYMLINK libspdk_fsdev.so 00:03:19.190 SO libspdk_nvme.so.15.0 00:03:19.190 LIB libspdk_event.a 00:03:19.190 SO libspdk_event.so.14.0 00:03:19.190 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:19.190 SYMLINK libspdk_event.so 00:03:19.190 SYMLINK libspdk_nvme.so 00:03:19.760 LIB libspdk_fuse_dispatcher.a 00:03:19.760 SO libspdk_fuse_dispatcher.so.1.0 00:03:20.021 SYMLINK libspdk_fuse_dispatcher.so 00:03:20.961 LIB libspdk_bdev.a 00:03:20.961 LIB libspdk_blob.a 00:03:20.961 SO libspdk_bdev.so.17.0 00:03:20.961 SO libspdk_blob.so.11.0 00:03:20.961 SYMLINK libspdk_bdev.so 00:03:20.961 SYMLINK libspdk_blob.so 00:03:20.961 CC lib/nvmf/ctrlr.o 00:03:20.961 CC lib/nvmf/ctrlr_bdev.o 00:03:20.961 CC lib/nvmf/ctrlr_discovery.o 00:03:20.961 CC lib/nvmf/subsystem.o 00:03:20.961 CC lib/scsi/dev.o 00:03:20.961 CC lib/ublk/ublk.o 00:03:20.961 CC lib/ftl/ftl_core.o 00:03:20.961 CC lib/nbd/nbd.o 00:03:21.246 CC lib/blobfs/blobfs.o 00:03:21.246 CC lib/lvol/lvol.o 00:03:21.246 CC lib/scsi/lun.o 00:03:21.246 CC lib/ftl/ftl_init.o 00:03:21.509 CC lib/nbd/nbd_rpc.o 00:03:21.509 CC lib/ftl/ftl_layout.o 00:03:21.509 CC lib/ftl/ftl_debug.o 00:03:21.509 CC lib/scsi/port.o 00:03:21.509 CC lib/scsi/scsi.o 00:03:21.509 LIB libspdk_nbd.a 00:03:21.509 SO libspdk_nbd.so.7.0 00:03:21.767 CC lib/ublk/ublk_rpc.o 00:03:21.768 CC lib/ftl/ftl_io.o 00:03:21.768 SYMLINK libspdk_nbd.so 00:03:21.768 CC lib/ftl/ftl_sb.o 00:03:21.768 CC lib/scsi/scsi_bdev.o 00:03:21.768 CC lib/blobfs/tree.o 00:03:21.768 CC lib/scsi/scsi_pr.o 00:03:21.768 CC lib/scsi/scsi_rpc.o 00:03:21.768 LIB libspdk_ublk.a 00:03:21.768 SO 
libspdk_ublk.so.3.0 00:03:21.768 LIB libspdk_blobfs.a 00:03:21.768 CC lib/ftl/ftl_l2p.o 00:03:21.768 CC lib/ftl/ftl_l2p_flat.o 00:03:21.768 CC lib/ftl/ftl_nv_cache.o 00:03:21.768 SO libspdk_blobfs.so.10.0 00:03:22.028 SYMLINK libspdk_ublk.so 00:03:22.028 CC lib/ftl/ftl_band.o 00:03:22.028 SYMLINK libspdk_blobfs.so 00:03:22.028 CC lib/ftl/ftl_band_ops.o 00:03:22.028 CC lib/scsi/task.o 00:03:22.028 LIB libspdk_lvol.a 00:03:22.028 CC lib/ftl/ftl_writer.o 00:03:22.028 CC lib/ftl/ftl_rq.o 00:03:22.028 SO libspdk_lvol.so.10.0 00:03:22.028 SYMLINK libspdk_lvol.so 00:03:22.028 CC lib/ftl/ftl_reloc.o 00:03:22.028 CC lib/ftl/ftl_l2p_cache.o 00:03:22.287 LIB libspdk_scsi.a 00:03:22.287 CC lib/ftl/ftl_p2l.o 00:03:22.287 CC lib/ftl/ftl_p2l_log.o 00:03:22.287 SO libspdk_scsi.so.9.0 00:03:22.287 CC lib/ftl/mngt/ftl_mngt.o 00:03:22.287 CC lib/nvmf/nvmf.o 00:03:22.287 CC lib/nvmf/nvmf_rpc.o 00:03:22.287 SYMLINK libspdk_scsi.so 00:03:22.287 CC lib/nvmf/transport.o 00:03:22.287 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:22.548 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:22.548 CC lib/nvmf/tcp.o 00:03:22.548 CC lib/iscsi/conn.o 00:03:22.548 CC lib/vhost/vhost.o 00:03:22.548 CC lib/vhost/vhost_rpc.o 00:03:22.810 CC lib/vhost/vhost_scsi.o 00:03:22.810 CC lib/iscsi/init_grp.o 00:03:22.810 CC lib/iscsi/iscsi.o 00:03:22.810 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:22.810 CC lib/iscsi/param.o 00:03:23.071 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:23.071 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:23.071 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:23.071 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:23.071 CC lib/iscsi/portal_grp.o 00:03:23.333 CC lib/nvmf/stubs.o 00:03:23.333 CC lib/nvmf/mdns_server.o 00:03:23.333 CC lib/nvmf/rdma.o 00:03:23.333 CC lib/nvmf/auth.o 00:03:23.333 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:23.333 CC lib/iscsi/tgt_node.o 00:03:23.333 CC lib/iscsi/iscsi_subsystem.o 00:03:23.594 CC lib/vhost/vhost_blk.o 00:03:23.594 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:23.594 CC lib/vhost/rte_vhost_user.o 00:03:23.594 CC lib/iscsi/iscsi_rpc.o 00:03:23.594 CC lib/iscsi/task.o 00:03:23.594 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:23.853 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:23.853 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:23.853 CC lib/ftl/utils/ftl_conf.o 00:03:23.853 CC lib/ftl/utils/ftl_md.o 00:03:23.853 CC lib/ftl/utils/ftl_mempool.o 00:03:23.853 CC lib/ftl/utils/ftl_bitmap.o 00:03:23.853 CC lib/ftl/utils/ftl_property.o 00:03:24.115 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.115 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.115 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.115 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.115 LIB libspdk_iscsi.a 00:03:24.115 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.115 SO libspdk_iscsi.so.8.0 00:03:24.115 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.115 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.115 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.115 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.115 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.115 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.376 SYMLINK libspdk_iscsi.so 00:03:24.376 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:24.376 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:24.376 LIB libspdk_vhost.a 00:03:24.376 CC lib/ftl/base/ftl_base_dev.o 00:03:24.376 CC lib/ftl/base/ftl_base_bdev.o 00:03:24.376 CC lib/ftl/ftl_trace.o 00:03:24.376 SO libspdk_vhost.so.8.0 00:03:24.376 SYMLINK libspdk_vhost.so 00:03:24.637 LIB libspdk_ftl.a 00:03:24.637 SO libspdk_ftl.so.9.0 00:03:24.898 SYMLINK libspdk_ftl.so 00:03:25.160 LIB libspdk_nvmf.a 00:03:25.160 SO 
libspdk_nvmf.so.20.0 00:03:25.422 SYMLINK libspdk_nvmf.so 00:03:25.684 CC module/env_dpdk/env_dpdk_rpc.o 00:03:25.684 CC module/sock/posix/posix.o 00:03:25.684 CC module/blob/bdev/blob_bdev.o 00:03:25.684 CC module/keyring/linux/keyring.o 00:03:25.684 CC module/fsdev/aio/fsdev_aio.o 00:03:25.684 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:25.684 CC module/accel/ioat/accel_ioat.o 00:03:25.684 CC module/accel/error/accel_error.o 00:03:25.684 CC module/keyring/file/keyring.o 00:03:25.684 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:25.684 LIB libspdk_env_dpdk_rpc.a 00:03:25.684 SO libspdk_env_dpdk_rpc.so.6.0 00:03:25.684 CC module/keyring/linux/keyring_rpc.o 00:03:25.684 CC module/keyring/file/keyring_rpc.o 00:03:25.684 SYMLINK libspdk_env_dpdk_rpc.so 00:03:25.684 LIB libspdk_scheduler_dpdk_governor.a 00:03:25.684 CC module/accel/ioat/accel_ioat_rpc.o 00:03:25.684 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:25.946 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:25.946 LIB libspdk_scheduler_dynamic.a 00:03:25.946 CC module/accel/error/accel_error_rpc.o 00:03:25.946 LIB libspdk_keyring_linux.a 00:03:25.946 LIB libspdk_keyring_file.a 00:03:25.946 SO libspdk_scheduler_dynamic.so.4.0 00:03:25.946 CC module/accel/dsa/accel_dsa.o 00:03:25.946 SO libspdk_keyring_linux.so.1.0 00:03:25.946 SO libspdk_keyring_file.so.2.0 00:03:25.946 LIB libspdk_blob_bdev.a 00:03:25.946 LIB libspdk_accel_ioat.a 00:03:25.946 SO libspdk_blob_bdev.so.11.0 00:03:25.946 SO libspdk_accel_ioat.so.6.0 00:03:25.946 SYMLINK libspdk_scheduler_dynamic.so 00:03:25.946 SYMLINK libspdk_keyring_linux.so 00:03:25.946 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:25.946 CC module/fsdev/aio/linux_aio_mgr.o 00:03:25.946 SYMLINK libspdk_keyring_file.so 00:03:25.946 CC module/scheduler/gscheduler/gscheduler.o 00:03:25.946 LIB libspdk_accel_error.a 00:03:25.946 SYMLINK libspdk_blob_bdev.so 00:03:25.946 SYMLINK libspdk_accel_ioat.so 00:03:25.946 CC module/accel/dsa/accel_dsa_rpc.o 00:03:25.946 SO libspdk_accel_error.so.2.0 00:03:25.946 SYMLINK libspdk_accel_error.so 00:03:25.946 LIB libspdk_scheduler_gscheduler.a 00:03:25.946 CC module/accel/iaa/accel_iaa.o 00:03:26.207 SO libspdk_scheduler_gscheduler.so.4.0 00:03:26.207 LIB libspdk_accel_dsa.a 00:03:26.207 SO libspdk_accel_dsa.so.5.0 00:03:26.207 SYMLINK libspdk_scheduler_gscheduler.so 00:03:26.207 CC module/bdev/delay/vbdev_delay.o 00:03:26.207 SYMLINK libspdk_accel_dsa.so 00:03:26.207 CC module/bdev/error/vbdev_error.o 00:03:26.207 LIB libspdk_fsdev_aio.a 00:03:26.207 CC module/blobfs/bdev/blobfs_bdev.o 00:03:26.207 CC module/bdev/gpt/gpt.o 00:03:26.207 CC module/accel/iaa/accel_iaa_rpc.o 00:03:26.207 CC module/bdev/lvol/vbdev_lvol.o 00:03:26.207 LIB libspdk_sock_posix.a 00:03:26.207 SO libspdk_fsdev_aio.so.1.0 00:03:26.207 CC module/bdev/malloc/bdev_malloc.o 00:03:26.207 SO libspdk_sock_posix.so.6.0 00:03:26.207 SYMLINK libspdk_fsdev_aio.so 00:03:26.207 CC module/bdev/gpt/vbdev_gpt.o 00:03:26.207 CC module/bdev/null/bdev_null.o 00:03:26.207 SYMLINK libspdk_sock_posix.so 00:03:26.207 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:26.466 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:26.466 LIB libspdk_accel_iaa.a 00:03:26.466 CC module/bdev/error/vbdev_error_rpc.o 00:03:26.466 SO libspdk_accel_iaa.so.3.0 00:03:26.466 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:26.466 SYMLINK libspdk_accel_iaa.so 00:03:26.466 LIB libspdk_bdev_gpt.a 00:03:26.466 LIB libspdk_blobfs_bdev.a 00:03:26.466 CC module/bdev/null/bdev_null_rpc.o 00:03:26.466 LIB libspdk_bdev_error.a 00:03:26.466 SO 
libspdk_bdev_gpt.so.6.0 00:03:26.466 LIB libspdk_bdev_delay.a 00:03:26.466 SO libspdk_blobfs_bdev.so.6.0 00:03:26.466 SO libspdk_bdev_error.so.6.0 00:03:26.466 SO libspdk_bdev_delay.so.6.0 00:03:26.466 SYMLINK libspdk_bdev_gpt.so 00:03:26.466 SYMLINK libspdk_blobfs_bdev.so 00:03:26.724 SYMLINK libspdk_bdev_error.so 00:03:26.724 SYMLINK libspdk_bdev_delay.so 00:03:26.724 CC module/bdev/nvme/bdev_nvme.o 00:03:26.724 CC module/bdev/passthru/vbdev_passthru.o 00:03:26.724 CC module/bdev/raid/bdev_raid.o 00:03:26.724 LIB libspdk_bdev_malloc.a 00:03:26.724 SO libspdk_bdev_malloc.so.6.0 00:03:26.724 LIB libspdk_bdev_null.a 00:03:26.724 SO libspdk_bdev_null.so.6.0 00:03:26.724 SYMLINK libspdk_bdev_malloc.so 00:03:26.724 CC module/bdev/raid/bdev_raid_rpc.o 00:03:26.724 CC module/bdev/split/vbdev_split.o 00:03:26.724 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:26.724 CC module/bdev/aio/bdev_aio.o 00:03:26.724 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:26.724 CC module/bdev/xnvme/bdev_xnvme.o 00:03:26.724 SYMLINK libspdk_bdev_null.so 00:03:26.724 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:26.982 CC module/bdev/split/vbdev_split_rpc.o 00:03:26.982 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:26.982 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:26.982 LIB libspdk_bdev_split.a 00:03:26.982 CC module/bdev/aio/bdev_aio_rpc.o 00:03:26.982 SO libspdk_bdev_split.so.6.0 00:03:26.982 LIB libspdk_bdev_xnvme.a 00:03:26.982 CC module/bdev/ftl/bdev_ftl.o 00:03:26.982 LIB libspdk_bdev_lvol.a 00:03:26.982 SO libspdk_bdev_xnvme.so.3.0 00:03:26.982 SYMLINK libspdk_bdev_split.so 00:03:26.982 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:26.982 SO libspdk_bdev_lvol.so.6.0 00:03:26.982 LIB libspdk_bdev_passthru.a 00:03:26.982 SO libspdk_bdev_passthru.so.6.0 00:03:26.982 SYMLINK libspdk_bdev_xnvme.so 00:03:26.982 LIB libspdk_bdev_aio.a 00:03:26.982 SYMLINK libspdk_bdev_lvol.so 00:03:26.982 CC module/bdev/nvme/nvme_rpc.o 00:03:26.982 SO libspdk_bdev_aio.so.6.0 00:03:27.240 SYMLINK libspdk_bdev_passthru.so 00:03:27.240 CC module/bdev/raid/bdev_raid_sb.o 00:03:27.240 SYMLINK libspdk_bdev_aio.so 00:03:27.240 CC module/bdev/raid/raid0.o 00:03:27.240 CC module/bdev/iscsi/bdev_iscsi.o 00:03:27.240 LIB libspdk_bdev_zone_block.a 00:03:27.240 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:27.240 SO libspdk_bdev_zone_block.so.6.0 00:03:27.240 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:27.240 SYMLINK libspdk_bdev_zone_block.so 00:03:27.240 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:27.240 CC module/bdev/nvme/bdev_mdns_client.o 00:03:27.240 CC module/bdev/nvme/vbdev_opal.o 00:03:27.240 CC module/bdev/raid/raid1.o 00:03:27.498 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:27.498 LIB libspdk_bdev_ftl.a 00:03:27.498 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:27.498 SO libspdk_bdev_ftl.so.6.0 00:03:27.498 CC module/bdev/raid/concat.o 00:03:27.498 LIB libspdk_bdev_iscsi.a 00:03:27.498 SYMLINK libspdk_bdev_ftl.so 00:03:27.498 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:27.498 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:27.498 SO libspdk_bdev_iscsi.so.6.0 00:03:27.498 SYMLINK libspdk_bdev_iscsi.so 00:03:27.756 LIB libspdk_bdev_raid.a 00:03:27.756 LIB libspdk_bdev_virtio.a 00:03:27.756 SO libspdk_bdev_raid.so.6.0 00:03:27.756 SO libspdk_bdev_virtio.so.6.0 00:03:27.756 SYMLINK libspdk_bdev_raid.so 00:03:27.756 SYMLINK libspdk_bdev_virtio.so 00:03:28.692 LIB libspdk_bdev_nvme.a 00:03:28.692 SO libspdk_bdev_nvme.so.7.1 00:03:28.692 SYMLINK libspdk_bdev_nvme.so 00:03:28.952 CC module/event/subsystems/vmd/vmd.o 00:03:28.952 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:03:28.952 CC module/event/subsystems/scheduler/scheduler.o 00:03:28.952 CC module/event/subsystems/sock/sock.o 00:03:28.952 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:28.952 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:28.952 CC module/event/subsystems/iobuf/iobuf.o 00:03:28.952 CC module/event/subsystems/fsdev/fsdev.o 00:03:28.952 CC module/event/subsystems/keyring/keyring.o 00:03:29.210 LIB libspdk_event_fsdev.a 00:03:29.210 LIB libspdk_event_scheduler.a 00:03:29.210 LIB libspdk_event_keyring.a 00:03:29.210 LIB libspdk_event_vhost_blk.a 00:03:29.210 SO libspdk_event_fsdev.so.1.0 00:03:29.210 LIB libspdk_event_sock.a 00:03:29.210 LIB libspdk_event_vmd.a 00:03:29.210 SO libspdk_event_keyring.so.1.0 00:03:29.210 SO libspdk_event_scheduler.so.4.0 00:03:29.210 LIB libspdk_event_iobuf.a 00:03:29.210 SO libspdk_event_sock.so.5.0 00:03:29.210 SO libspdk_event_vhost_blk.so.3.0 00:03:29.210 SO libspdk_event_vmd.so.6.0 00:03:29.210 SYMLINK libspdk_event_fsdev.so 00:03:29.210 SO libspdk_event_iobuf.so.3.0 00:03:29.210 SYMLINK libspdk_event_keyring.so 00:03:29.210 SYMLINK libspdk_event_scheduler.so 00:03:29.210 SYMLINK libspdk_event_vhost_blk.so 00:03:29.210 SYMLINK libspdk_event_sock.so 00:03:29.210 SYMLINK libspdk_event_vmd.so 00:03:29.210 SYMLINK libspdk_event_iobuf.so 00:03:29.468 CC module/event/subsystems/accel/accel.o 00:03:29.727 LIB libspdk_event_accel.a 00:03:29.727 SO libspdk_event_accel.so.6.0 00:03:29.727 SYMLINK libspdk_event_accel.so 00:03:29.998 CC module/event/subsystems/bdev/bdev.o 00:03:29.998 LIB libspdk_event_bdev.a 00:03:29.998 SO libspdk_event_bdev.so.6.0 00:03:29.998 SYMLINK libspdk_event_bdev.so 00:03:30.256 CC module/event/subsystems/ublk/ublk.o 00:03:30.256 CC module/event/subsystems/scsi/scsi.o 00:03:30.256 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:30.256 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:30.256 CC module/event/subsystems/nbd/nbd.o 00:03:30.256 LIB libspdk_event_nbd.a 00:03:30.513 LIB libspdk_event_ublk.a 00:03:30.513 LIB libspdk_event_scsi.a 00:03:30.513 SO libspdk_event_nbd.so.6.0 00:03:30.513 SO libspdk_event_ublk.so.3.0 00:03:30.513 SO libspdk_event_scsi.so.6.0 00:03:30.513 LIB libspdk_event_nvmf.a 00:03:30.513 SYMLINK libspdk_event_nbd.so 00:03:30.513 SYMLINK libspdk_event_scsi.so 00:03:30.513 SYMLINK libspdk_event_ublk.so 00:03:30.513 SO libspdk_event_nvmf.so.6.0 00:03:30.513 SYMLINK libspdk_event_nvmf.so 00:03:30.513 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.513 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.770 LIB libspdk_event_iscsi.a 00:03:30.770 LIB libspdk_event_vhost_scsi.a 00:03:30.770 SO libspdk_event_iscsi.so.6.0 00:03:30.770 SO libspdk_event_vhost_scsi.so.3.0 00:03:30.770 SYMLINK libspdk_event_iscsi.so 00:03:30.770 SYMLINK libspdk_event_vhost_scsi.so 00:03:31.028 SO libspdk.so.6.0 00:03:31.028 SYMLINK libspdk.so 00:03:31.028 TEST_HEADER include/spdk/accel.h 00:03:31.028 TEST_HEADER include/spdk/accel_module.h 00:03:31.028 CXX app/trace/trace.o 00:03:31.028 TEST_HEADER include/spdk/assert.h 00:03:31.028 TEST_HEADER include/spdk/barrier.h 00:03:31.028 TEST_HEADER include/spdk/base64.h 00:03:31.028 CC test/rpc_client/rpc_client_test.o 00:03:31.028 TEST_HEADER include/spdk/bdev.h 00:03:31.028 TEST_HEADER include/spdk/bdev_module.h 00:03:31.028 CC app/trace_record/trace_record.o 00:03:31.028 TEST_HEADER include/spdk/bdev_zone.h 00:03:31.028 TEST_HEADER include/spdk/bit_array.h 00:03:31.028 TEST_HEADER include/spdk/bit_pool.h 00:03:31.028 TEST_HEADER 
include/spdk/blob_bdev.h 00:03:31.028 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.028 TEST_HEADER include/spdk/blobfs.h 00:03:31.028 TEST_HEADER include/spdk/blob.h 00:03:31.028 TEST_HEADER include/spdk/conf.h 00:03:31.028 TEST_HEADER include/spdk/config.h 00:03:31.028 TEST_HEADER include/spdk/cpuset.h 00:03:31.028 TEST_HEADER include/spdk/crc16.h 00:03:31.286 TEST_HEADER include/spdk/crc32.h 00:03:31.286 TEST_HEADER include/spdk/crc64.h 00:03:31.286 CC app/nvmf_tgt/nvmf_main.o 00:03:31.286 TEST_HEADER include/spdk/dif.h 00:03:31.286 TEST_HEADER include/spdk/dma.h 00:03:31.286 TEST_HEADER include/spdk/endian.h 00:03:31.286 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.286 TEST_HEADER include/spdk/env.h 00:03:31.286 TEST_HEADER include/spdk/event.h 00:03:31.286 TEST_HEADER include/spdk/fd_group.h 00:03:31.286 TEST_HEADER include/spdk/fd.h 00:03:31.286 TEST_HEADER include/spdk/file.h 00:03:31.286 TEST_HEADER include/spdk/fsdev.h 00:03:31.286 CC test/thread/poller_perf/poller_perf.o 00:03:31.286 TEST_HEADER include/spdk/fsdev_module.h 00:03:31.286 TEST_HEADER include/spdk/ftl.h 00:03:31.286 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:31.286 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.286 TEST_HEADER include/spdk/hexlify.h 00:03:31.286 TEST_HEADER include/spdk/histogram_data.h 00:03:31.286 TEST_HEADER include/spdk/idxd.h 00:03:31.286 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.286 TEST_HEADER include/spdk/init.h 00:03:31.286 TEST_HEADER include/spdk/ioat.h 00:03:31.286 CC examples/util/zipf/zipf.o 00:03:31.286 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.286 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.286 TEST_HEADER include/spdk/json.h 00:03:31.286 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.286 TEST_HEADER include/spdk/keyring.h 00:03:31.286 TEST_HEADER include/spdk/keyring_module.h 00:03:31.286 TEST_HEADER include/spdk/likely.h 00:03:31.286 TEST_HEADER include/spdk/log.h 00:03:31.286 CC test/dma/test_dma/test_dma.o 00:03:31.286 TEST_HEADER include/spdk/lvol.h 00:03:31.286 TEST_HEADER include/spdk/md5.h 00:03:31.286 TEST_HEADER include/spdk/memory.h 00:03:31.286 TEST_HEADER include/spdk/mmio.h 00:03:31.286 TEST_HEADER include/spdk/nbd.h 00:03:31.286 CC test/app/bdev_svc/bdev_svc.o 00:03:31.286 TEST_HEADER include/spdk/net.h 00:03:31.286 TEST_HEADER include/spdk/notify.h 00:03:31.286 TEST_HEADER include/spdk/nvme.h 00:03:31.286 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.286 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.286 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.286 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.286 TEST_HEADER include/spdk/nvme_zns.h 00:03:31.286 CC test/env/mem_callbacks/mem_callbacks.o 00:03:31.286 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.286 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.286 TEST_HEADER include/spdk/nvmf.h 00:03:31.286 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.286 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.286 TEST_HEADER include/spdk/opal.h 00:03:31.286 TEST_HEADER include/spdk/opal_spec.h 00:03:31.286 TEST_HEADER include/spdk/pci_ids.h 00:03:31.286 TEST_HEADER include/spdk/pipe.h 00:03:31.286 TEST_HEADER include/spdk/queue.h 00:03:31.286 TEST_HEADER include/spdk/reduce.h 00:03:31.286 TEST_HEADER include/spdk/rpc.h 00:03:31.286 TEST_HEADER include/spdk/scheduler.h 00:03:31.286 TEST_HEADER include/spdk/scsi.h 00:03:31.286 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.286 TEST_HEADER include/spdk/sock.h 00:03:31.286 TEST_HEADER include/spdk/stdinc.h 00:03:31.286 TEST_HEADER include/spdk/string.h 00:03:31.286 
TEST_HEADER include/spdk/thread.h 00:03:31.286 TEST_HEADER include/spdk/trace.h 00:03:31.286 LINK rpc_client_test 00:03:31.286 TEST_HEADER include/spdk/trace_parser.h 00:03:31.286 TEST_HEADER include/spdk/tree.h 00:03:31.286 TEST_HEADER include/spdk/ublk.h 00:03:31.286 LINK poller_perf 00:03:31.286 TEST_HEADER include/spdk/util.h 00:03:31.286 TEST_HEADER include/spdk/uuid.h 00:03:31.286 TEST_HEADER include/spdk/version.h 00:03:31.286 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:31.286 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:31.286 TEST_HEADER include/spdk/vhost.h 00:03:31.286 TEST_HEADER include/spdk/vmd.h 00:03:31.286 TEST_HEADER include/spdk/xor.h 00:03:31.286 TEST_HEADER include/spdk/zipf.h 00:03:31.286 CXX test/cpp_headers/accel.o 00:03:31.286 LINK nvmf_tgt 00:03:31.286 LINK zipf 00:03:31.286 LINK spdk_trace_record 00:03:31.544 LINK spdk_trace 00:03:31.544 LINK bdev_svc 00:03:31.544 CXX test/cpp_headers/accel_module.o 00:03:31.544 CC test/app/histogram_perf/histogram_perf.o 00:03:31.544 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:31.544 CC examples/ioat/perf/perf.o 00:03:31.544 CXX test/cpp_headers/assert.o 00:03:31.544 CC test/env/vtophys/vtophys.o 00:03:31.544 CC app/iscsi_tgt/iscsi_tgt.o 00:03:31.544 CC test/event/event_perf/event_perf.o 00:03:31.802 LINK histogram_perf 00:03:31.802 LINK test_dma 00:03:31.802 LINK mem_callbacks 00:03:31.802 CXX test/cpp_headers/barrier.o 00:03:31.802 CC app/spdk_tgt/spdk_tgt.o 00:03:31.802 LINK vtophys 00:03:31.802 LINK ioat_perf 00:03:31.802 LINK event_perf 00:03:31.802 CC app/spdk_lspci/spdk_lspci.o 00:03:31.802 LINK iscsi_tgt 00:03:31.802 CXX test/cpp_headers/base64.o 00:03:31.802 CC test/app/jsoncat/jsoncat.o 00:03:31.802 LINK nvme_fuzz 00:03:31.802 CC test/app/stub/stub.o 00:03:32.060 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:32.060 LINK spdk_lspci 00:03:32.060 LINK spdk_tgt 00:03:32.060 CC examples/ioat/verify/verify.o 00:03:32.060 CXX test/cpp_headers/bdev.o 00:03:32.060 CC test/event/reactor/reactor.o 00:03:32.060 LINK jsoncat 00:03:32.060 LINK env_dpdk_post_init 00:03:32.060 CC app/spdk_nvme_perf/perf.o 00:03:32.060 CXX test/cpp_headers/bdev_module.o 00:03:32.060 LINK stub 00:03:32.060 LINK reactor 00:03:32.060 CXX test/cpp_headers/bdev_zone.o 00:03:32.060 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:32.060 LINK verify 00:03:32.060 CXX test/cpp_headers/bit_array.o 00:03:32.060 CC test/event/reactor_perf/reactor_perf.o 00:03:32.060 CXX test/cpp_headers/bit_pool.o 00:03:32.317 CC test/env/memory/memory_ut.o 00:03:32.317 CXX test/cpp_headers/blob_bdev.o 00:03:32.317 LINK reactor_perf 00:03:32.317 CC test/event/app_repeat/app_repeat.o 00:03:32.317 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:32.317 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.317 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.317 CC examples/idxd/perf/perf.o 00:03:32.317 CXX test/cpp_headers/blobfs_bdev.o 00:03:32.317 CXX test/cpp_headers/blobfs.o 00:03:32.575 LINK app_repeat 00:03:32.575 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:32.575 LINK lsvmd 00:03:32.575 LINK interrupt_tgt 00:03:32.575 CXX test/cpp_headers/blob.o 00:03:32.575 LINK idxd_perf 00:03:32.575 CC app/spdk_nvme_identify/identify.o 00:03:32.575 CC test/event/scheduler/scheduler.o 00:03:32.575 CXX test/cpp_headers/conf.o 00:03:32.575 CC examples/vmd/led/led.o 00:03:32.833 CXX test/cpp_headers/config.o 00:03:32.833 LINK led 00:03:32.833 CXX test/cpp_headers/cpuset.o 00:03:32.833 CC app/spdk_nvme_discover/discovery_aer.o 00:03:32.833 CC examples/thread/thread/thread_ex.o 
00:03:32.833 LINK scheduler 00:03:32.833 LINK vhost_fuzz 00:03:32.833 LINK spdk_nvme_perf 00:03:32.833 CXX test/cpp_headers/crc16.o 00:03:32.833 CXX test/cpp_headers/crc32.o 00:03:33.091 CC app/spdk_top/spdk_top.o 00:03:33.091 LINK spdk_nvme_discover 00:03:33.091 LINK memory_ut 00:03:33.091 LINK thread 00:03:33.091 CXX test/cpp_headers/crc64.o 00:03:33.091 CXX test/cpp_headers/dif.o 00:03:33.091 CC app/vhost/vhost.o 00:03:33.091 CC examples/sock/hello_world/hello_sock.o 00:03:33.091 CC app/spdk_dd/spdk_dd.o 00:03:33.349 LINK spdk_nvme_identify 00:03:33.349 CXX test/cpp_headers/dma.o 00:03:33.349 CC test/env/pci/pci_ut.o 00:03:33.349 LINK vhost 00:03:33.349 CC app/fio/nvme/fio_plugin.o 00:03:33.349 CXX test/cpp_headers/endian.o 00:03:33.349 CC test/accel/dif/dif.o 00:03:33.349 LINK hello_sock 00:03:33.349 CXX test/cpp_headers/env_dpdk.o 00:03:33.608 LINK spdk_dd 00:03:33.608 CC test/blobfs/mkfs/mkfs.o 00:03:33.608 CXX test/cpp_headers/env.o 00:03:33.608 CC examples/accel/perf/accel_perf.o 00:03:33.608 CC examples/blob/hello_world/hello_blob.o 00:03:33.608 LINK pci_ut 00:03:33.608 CXX test/cpp_headers/event.o 00:03:33.608 LINK mkfs 00:03:33.866 CC examples/blob/cli/blobcli.o 00:03:33.866 LINK iscsi_fuzz 00:03:33.866 LINK spdk_nvme 00:03:33.866 LINK spdk_top 00:03:33.866 CXX test/cpp_headers/fd_group.o 00:03:33.866 LINK hello_blob 00:03:33.866 LINK dif 00:03:33.866 CC app/fio/bdev/fio_plugin.o 00:03:33.866 CC examples/nvme/hello_world/hello_world.o 00:03:34.124 LINK accel_perf 00:03:34.124 CXX test/cpp_headers/fd.o 00:03:34.124 CC test/nvme/aer/aer.o 00:03:34.124 CC examples/nvme/reconnect/reconnect.o 00:03:34.124 CXX test/cpp_headers/file.o 00:03:34.124 CC test/lvol/esnap/esnap.o 00:03:34.124 CC test/nvme/reset/reset.o 00:03:34.124 CXX test/cpp_headers/fsdev.o 00:03:34.124 LINK hello_world 00:03:34.124 CC test/nvme/e2edp/nvme_dp.o 00:03:34.124 CC test/nvme/sgl/sgl.o 00:03:34.124 LINK blobcli 00:03:34.382 LINK aer 00:03:34.382 LINK spdk_bdev 00:03:34.382 LINK reconnect 00:03:34.382 CXX test/cpp_headers/fsdev_module.o 00:03:34.382 LINK reset 00:03:34.382 CXX test/cpp_headers/ftl.o 00:03:34.382 CC test/nvme/overhead/overhead.o 00:03:34.382 CC test/nvme/err_injection/err_injection.o 00:03:34.382 LINK sgl 00:03:34.382 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.382 LINK nvme_dp 00:03:34.382 CXX test/cpp_headers/fuse_dispatcher.o 00:03:34.641 CC test/bdev/bdevio/bdevio.o 00:03:34.641 CC examples/nvme/arbitration/arbitration.o 00:03:34.641 CC examples/nvme/hotplug/hotplug.o 00:03:34.641 LINK err_injection 00:03:34.641 LINK overhead 00:03:34.641 CXX test/cpp_headers/gpt_spec.o 00:03:34.641 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:34.641 CC examples/nvme/abort/abort.o 00:03:34.641 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:34.641 CXX test/cpp_headers/hexlify.o 00:03:34.900 LINK hotplug 00:03:34.900 LINK cmb_copy 00:03:34.900 CC test/nvme/startup/startup.o 00:03:34.900 LINK bdevio 00:03:34.900 LINK arbitration 00:03:34.900 LINK nvme_manage 00:03:34.900 CXX test/cpp_headers/histogram_data.o 00:03:34.900 LINK pmr_persistence 00:03:34.900 CXX test/cpp_headers/idxd.o 00:03:34.900 CXX test/cpp_headers/idxd_spec.o 00:03:34.900 LINK abort 00:03:34.900 CXX test/cpp_headers/init.o 00:03:34.900 CXX test/cpp_headers/ioat.o 00:03:34.901 CXX test/cpp_headers/ioat_spec.o 00:03:34.901 LINK startup 00:03:34.901 CXX test/cpp_headers/iscsi_spec.o 00:03:34.901 CXX test/cpp_headers/json.o 00:03:35.161 CC test/nvme/reserve/reserve.o 00:03:35.161 CXX test/cpp_headers/jsonrpc.o 00:03:35.161 CXX 
test/cpp_headers/keyring.o 00:03:35.161 CXX test/cpp_headers/keyring_module.o 00:03:35.161 CXX test/cpp_headers/likely.o 00:03:35.161 CXX test/cpp_headers/log.o 00:03:35.161 CXX test/cpp_headers/lvol.o 00:03:35.161 CC test/nvme/simple_copy/simple_copy.o 00:03:35.161 CXX test/cpp_headers/md5.o 00:03:35.161 CXX test/cpp_headers/memory.o 00:03:35.161 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:35.161 CXX test/cpp_headers/mmio.o 00:03:35.161 CC test/nvme/connect_stress/connect_stress.o 00:03:35.161 CXX test/cpp_headers/nbd.o 00:03:35.161 LINK reserve 00:03:35.422 CXX test/cpp_headers/net.o 00:03:35.422 LINK simple_copy 00:03:35.422 CXX test/cpp_headers/notify.o 00:03:35.422 CC examples/bdev/bdevperf/bdevperf.o 00:03:35.422 CC examples/bdev/hello_world/hello_bdev.o 00:03:35.422 LINK connect_stress 00:03:35.422 CXX test/cpp_headers/nvme.o 00:03:35.422 LINK hello_fsdev 00:03:35.422 CC test/nvme/boot_partition/boot_partition.o 00:03:35.422 CXX test/cpp_headers/nvme_intel.o 00:03:35.422 CC test/nvme/compliance/nvme_compliance.o 00:03:35.422 CC test/nvme/fused_ordering/fused_ordering.o 00:03:35.422 CXX test/cpp_headers/nvme_ocssd.o 00:03:35.422 LINK hello_bdev 00:03:35.682 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:35.682 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:35.682 LINK boot_partition 00:03:35.682 CC test/nvme/fdp/fdp.o 00:03:35.682 CXX test/cpp_headers/nvme_spec.o 00:03:35.682 LINK fused_ordering 00:03:35.682 CXX test/cpp_headers/nvme_zns.o 00:03:35.682 LINK doorbell_aers 00:03:35.682 CXX test/cpp_headers/nvmf_cmd.o 00:03:35.682 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:35.682 LINK nvme_compliance 00:03:35.682 CC test/nvme/cuse/cuse.o 00:03:35.682 CXX test/cpp_headers/nvmf.o 00:03:35.682 CXX test/cpp_headers/nvmf_spec.o 00:03:35.940 CXX test/cpp_headers/nvmf_transport.o 00:03:35.940 CXX test/cpp_headers/opal.o 00:03:35.940 LINK fdp 00:03:35.940 CXX test/cpp_headers/opal_spec.o 00:03:35.940 CXX test/cpp_headers/pci_ids.o 00:03:35.940 CXX test/cpp_headers/pipe.o 00:03:35.940 CXX test/cpp_headers/queue.o 00:03:35.940 CXX test/cpp_headers/reduce.o 00:03:35.940 CXX test/cpp_headers/rpc.o 00:03:35.940 CXX test/cpp_headers/scheduler.o 00:03:35.940 CXX test/cpp_headers/scsi_spec.o 00:03:35.940 CXX test/cpp_headers/scsi.o 00:03:35.940 CXX test/cpp_headers/sock.o 00:03:35.940 CXX test/cpp_headers/stdinc.o 00:03:36.198 CXX test/cpp_headers/string.o 00:03:36.198 CXX test/cpp_headers/thread.o 00:03:36.198 CXX test/cpp_headers/trace.o 00:03:36.198 CXX test/cpp_headers/trace_parser.o 00:03:36.198 CXX test/cpp_headers/tree.o 00:03:36.198 CXX test/cpp_headers/ublk.o 00:03:36.198 CXX test/cpp_headers/util.o 00:03:36.198 CXX test/cpp_headers/uuid.o 00:03:36.198 LINK bdevperf 00:03:36.198 CXX test/cpp_headers/version.o 00:03:36.198 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.198 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.198 CXX test/cpp_headers/vhost.o 00:03:36.198 CXX test/cpp_headers/vmd.o 00:03:36.198 CXX test/cpp_headers/xor.o 00:03:36.198 CXX test/cpp_headers/zipf.o 00:03:36.457 CC examples/nvmf/nvmf/nvmf.o 00:03:36.715 LINK nvmf 00:03:36.974 LINK cuse 00:03:38.349 LINK esnap 00:03:38.917 00:03:38.917 real 1m2.844s 00:03:38.917 user 5m53.331s 00:03:38.917 sys 1m1.153s 00:03:38.917 ************************************ 00:03:38.917 END TEST make 00:03:38.917 ************************************ 00:03:38.917 14:40:24 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:38.917 14:40:24 make -- common/autotest_common.sh@10 -- $ set +x 00:03:38.917 14:40:24 -- spdk/autobuild.sh@1 
-- $ stop_monitor_resources 00:03:38.917 14:40:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:38.917 14:40:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:38.917 14:40:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.917 14:40:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:38.917 14:40:24 -- pm/common@44 -- $ pid=5073 00:03:38.917 14:40:24 -- pm/common@50 -- $ kill -TERM 5073 00:03:38.917 14:40:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.917 14:40:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:38.917 14:40:24 -- pm/common@44 -- $ pid=5074 00:03:38.917 14:40:24 -- pm/common@50 -- $ kill -TERM 5074 00:03:38.917 14:40:24 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:38.917 14:40:24 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:38.917 14:40:24 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:38.917 14:40:24 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:38.917 14:40:24 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:38.917 14:40:24 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:38.917 14:40:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:38.917 14:40:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:38.917 14:40:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:38.917 14:40:24 -- scripts/common.sh@336 -- # IFS=.-: 00:03:38.917 14:40:24 -- scripts/common.sh@336 -- # read -ra ver1 00:03:38.917 14:40:24 -- scripts/common.sh@337 -- # IFS=.-: 00:03:38.917 14:40:24 -- scripts/common.sh@337 -- # read -ra ver2 00:03:38.917 14:40:24 -- scripts/common.sh@338 -- # local 'op=<' 00:03:38.917 14:40:24 -- scripts/common.sh@340 -- # ver1_l=2 00:03:38.917 14:40:24 -- scripts/common.sh@341 -- # ver2_l=1 00:03:38.917 14:40:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:38.917 14:40:24 -- scripts/common.sh@344 -- # case "$op" in 00:03:38.917 14:40:24 -- scripts/common.sh@345 -- # : 1 00:03:38.917 14:40:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:38.917 14:40:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:38.917 14:40:24 -- scripts/common.sh@365 -- # decimal 1 00:03:38.917 14:40:24 -- scripts/common.sh@353 -- # local d=1 00:03:38.917 14:40:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:38.917 14:40:24 -- scripts/common.sh@355 -- # echo 1 00:03:38.917 14:40:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:38.917 14:40:24 -- scripts/common.sh@366 -- # decimal 2 00:03:38.917 14:40:24 -- scripts/common.sh@353 -- # local d=2 00:03:38.917 14:40:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:38.917 14:40:24 -- scripts/common.sh@355 -- # echo 2 00:03:38.917 14:40:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:38.917 14:40:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:38.917 14:40:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:38.917 14:40:24 -- scripts/common.sh@368 -- # return 0 00:03:38.917 14:40:24 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:38.917 14:40:24 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:38.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.917 --rc genhtml_branch_coverage=1 00:03:38.917 --rc genhtml_function_coverage=1 00:03:38.917 --rc genhtml_legend=1 00:03:38.917 --rc geninfo_all_blocks=1 00:03:38.917 --rc geninfo_unexecuted_blocks=1 00:03:38.917 00:03:38.917 ' 00:03:38.917 14:40:24 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:38.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.917 --rc genhtml_branch_coverage=1 00:03:38.917 --rc genhtml_function_coverage=1 00:03:38.917 --rc genhtml_legend=1 00:03:38.917 --rc geninfo_all_blocks=1 00:03:38.917 --rc geninfo_unexecuted_blocks=1 00:03:38.917 00:03:38.917 ' 00:03:38.917 14:40:24 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:38.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.917 --rc genhtml_branch_coverage=1 00:03:38.917 --rc genhtml_function_coverage=1 00:03:38.917 --rc genhtml_legend=1 00:03:38.917 --rc geninfo_all_blocks=1 00:03:38.917 --rc geninfo_unexecuted_blocks=1 00:03:38.917 00:03:38.917 ' 00:03:38.917 14:40:24 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:38.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.917 --rc genhtml_branch_coverage=1 00:03:38.917 --rc genhtml_function_coverage=1 00:03:38.917 --rc genhtml_legend=1 00:03:38.917 --rc geninfo_all_blocks=1 00:03:38.917 --rc geninfo_unexecuted_blocks=1 00:03:38.917 00:03:38.917 ' 00:03:38.917 14:40:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:38.917 14:40:24 -- nvmf/common.sh@7 -- # uname -s 00:03:38.917 14:40:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:38.917 14:40:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:38.917 14:40:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:38.917 14:40:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:38.917 14:40:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:38.917 14:40:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:38.917 14:40:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:38.917 14:40:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:38.917 14:40:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:38.917 14:40:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:38.917 14:40:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4954d9b-ee6f-472e-b8b2-399f71fd8a7f 00:03:38.917 
14:40:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=a4954d9b-ee6f-472e-b8b2-399f71fd8a7f 00:03:38.917 14:40:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:38.917 14:40:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:38.917 14:40:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:38.917 14:40:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:38.917 14:40:24 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:38.917 14:40:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:38.917 14:40:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:38.917 14:40:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:38.917 14:40:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:38.917 14:40:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.917 14:40:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.917 14:40:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.917 14:40:24 -- paths/export.sh@5 -- # export PATH 00:03:38.917 14:40:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.917 14:40:24 -- nvmf/common.sh@51 -- # : 0 00:03:38.917 14:40:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:38.917 14:40:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:38.917 14:40:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:38.917 14:40:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:38.917 14:40:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:38.917 14:40:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:38.917 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:38.917 14:40:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:38.917 14:40:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:38.917 14:40:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:38.917 14:40:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:38.917 14:40:24 -- spdk/autotest.sh@32 -- # uname -s 00:03:38.917 14:40:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:38.917 14:40:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:38.917 14:40:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.917 14:40:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:38.917 14:40:24 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.917 14:40:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:38.917 14:40:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:38.917 14:40:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:38.917 14:40:24 -- spdk/autotest.sh@48 -- # udevadm_pid=54183 00:03:38.917 14:40:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:38.917 14:40:24 -- pm/common@17 -- # local monitor 00:03:38.917 14:40:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.918 14:40:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:38.918 14:40:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.918 14:40:24 -- pm/common@25 -- # sleep 1 00:03:38.918 14:40:24 -- pm/common@21 -- # date +%s 00:03:38.918 14:40:24 -- pm/common@21 -- # date +%s 00:03:38.918 14:40:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731854424 00:03:38.918 14:40:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731854424 00:03:38.918 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731854424_collect-vmstat.pm.log 00:03:38.918 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731854424_collect-cpu-load.pm.log 00:03:40.292 14:40:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:40.292 14:40:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:40.292 14:40:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.292 14:40:25 -- common/autotest_common.sh@10 -- # set +x 00:03:40.292 14:40:25 -- spdk/autotest.sh@59 -- # create_test_list 00:03:40.292 14:40:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:40.292 14:40:25 -- common/autotest_common.sh@10 -- # set +x 00:03:40.292 14:40:25 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:40.292 14:40:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:40.292 14:40:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:40.292 14:40:25 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:40.292 14:40:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:40.292 14:40:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:40.292 14:40:25 -- common/autotest_common.sh@1457 -- # uname 00:03:40.292 14:40:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:40.292 14:40:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:40.292 14:40:25 -- common/autotest_common.sh@1477 -- # uname 00:03:40.292 14:40:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:40.292 14:40:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:40.292 14:40:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:40.292 lcov: LCOV version 1.15 00:03:40.292 14:40:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:55.186 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:55.186 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:10.141 14:40:54 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:10.141 14:40:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.141 14:40:54 -- common/autotest_common.sh@10 -- # set +x 00:04:10.141 14:40:54 -- spdk/autotest.sh@78 -- # rm -f 00:04:10.141 14:40:54 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.141 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:10.141 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:10.141 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:10.141 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:10.141 14:40:55 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:10.141 14:40:55 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:10.141 14:40:55 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:10.141 14:40:55 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:10.141 14:40:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:10.141 14:40:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:10.141 14:40:55 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:10.141 14:40:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.141 14:40:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:10.141 14:40:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:10.141 14:40:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:10.141 14:40:55 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:10.141 14:40:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:10.141 14:40:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:10.141 14:40:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:10.141 14:40:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:10.141 14:40:55 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:10.141 14:40:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:10.142 14:40:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:10.142 14:40:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:10.142 14:40:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:10.142 14:40:55 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:10.142 14:40:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:10.142 14:40:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:10.142 14:40:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:10.142 14:40:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:10.142 14:40:55 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:10.142 14:40:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:10.142 14:40:55 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:10.142 14:40:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:10.142 14:40:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:10.142 14:40:55 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:10.142 14:40:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:10.142 14:40:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:10.142 14:40:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:10.142 14:40:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:10.142 14:40:55 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:10.142 14:40:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:10.142 14:40:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:10.142 14:40:55 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:10.142 14:40:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.142 14:40:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.142 14:40:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:10.142 14:40:55 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:10.142 14:40:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:10.403 No valid GPT data, bailing 00:04:10.403 14:40:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.403 14:40:55 -- scripts/common.sh@394 -- # pt= 00:04:10.403 14:40:55 -- scripts/common.sh@395 -- # return 1 00:04:10.403 14:40:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:10.403 1+0 records in 00:04:10.403 1+0 records out 00:04:10.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0347866 s, 30.1 MB/s 00:04:10.403 14:40:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.403 14:40:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.403 14:40:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:10.403 14:40:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:10.403 14:40:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:10.403 No valid GPT data, bailing 00:04:10.403 14:40:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:10.403 14:40:55 -- scripts/common.sh@394 -- # pt= 00:04:10.403 14:40:55 -- scripts/common.sh@395 -- # return 1 00:04:10.403 14:40:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:10.403 1+0 records in 00:04:10.403 1+0 records out 00:04:10.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00546804 s, 192 MB/s 00:04:10.403 14:40:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.403 14:40:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.403 14:40:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:10.403 14:40:55 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:10.403 14:40:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:10.403 No valid GPT data, bailing 00:04:10.403 14:40:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:10.403 14:40:55 -- scripts/common.sh@394 -- # pt= 00:04:10.403 14:40:55 -- scripts/common.sh@395 -- # return 1 00:04:10.662 14:40:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:10.662 1+0 
records in 00:04:10.662 1+0 records out 00:04:10.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596124 s, 176 MB/s 00:04:10.662 14:40:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.662 14:40:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.662 14:40:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:10.662 14:40:55 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:10.662 14:40:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:10.662 No valid GPT data, bailing 00:04:10.662 14:40:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:10.662 14:40:56 -- scripts/common.sh@394 -- # pt= 00:04:10.662 14:40:56 -- scripts/common.sh@395 -- # return 1 00:04:10.662 14:40:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:10.662 1+0 records in 00:04:10.662 1+0 records out 00:04:10.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633304 s, 166 MB/s 00:04:10.662 14:40:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.662 14:40:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.662 14:40:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:10.662 14:40:56 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:10.662 14:40:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:10.662 No valid GPT data, bailing 00:04:10.662 14:40:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:10.662 14:40:56 -- scripts/common.sh@394 -- # pt= 00:04:10.662 14:40:56 -- scripts/common.sh@395 -- # return 1 00:04:10.662 14:40:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:10.662 1+0 records in 00:04:10.662 1+0 records out 00:04:10.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527697 s, 199 MB/s 00:04:10.662 14:40:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.662 14:40:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.662 14:40:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:10.662 14:40:56 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:10.662 14:40:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:10.662 No valid GPT data, bailing 00:04:10.662 14:40:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:10.662 14:40:56 -- scripts/common.sh@394 -- # pt= 00:04:10.662 14:40:56 -- scripts/common.sh@395 -- # return 1 00:04:10.662 14:40:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:10.918 1+0 records in 00:04:10.918 1+0 records out 00:04:10.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560787 s, 187 MB/s 00:04:10.918 14:40:56 -- spdk/autotest.sh@105 -- # sync 00:04:10.918 14:40:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:10.918 14:40:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:10.918 14:40:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:12.295 14:40:57 -- spdk/autotest.sh@111 -- # uname -s 00:04:12.553 14:40:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:12.553 14:40:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:12.553 14:40:57 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:12.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.378 
Hugepages 00:04:13.378 node hugesize free / total 00:04:13.378 node0 1048576kB 0 / 0 00:04:13.378 node0 2048kB 0 / 0 00:04:13.378 00:04:13.378 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.378 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:13.378 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:13.378 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:13.639 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:13.639 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:13.639 14:40:59 -- spdk/autotest.sh@117 -- # uname -s 00:04:13.639 14:40:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:13.639 14:40:59 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:13.639 14:40:59 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.539 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.539 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.539 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.812 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.812 14:41:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:15.747 14:41:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:15.747 14:41:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:15.747 14:41:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:15.747 14:41:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:15.747 14:41:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:15.747 14:41:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:15.747 14:41:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.747 14:41:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:15.747 14:41:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:15.747 14:41:01 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:15.747 14:41:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:15.747 14:41:01 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.264 Waiting for block devices as requested 00:04:16.264 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:16.264 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:16.523 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:16.523 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.789 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:21.789 14:41:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.789 14:41:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.789 14:41:07 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.789 14:41:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:21.789 14:41:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:21.789 14:41:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.789 14:41:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.789 14:41:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.789 14:41:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1543 -- # continue 00:04:21.789 14:41:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.789 14:41:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.789 14:41:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.789 14:41:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.789 14:41:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1543 -- # continue 00:04:21.789 14:41:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.789 14:41:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.789 14:41:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.789 14:41:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.789 14:41:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1543 -- # continue 00:04:21.789 14:41:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.789 14:41:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:21.789 14:41:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.789 14:41:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.789 14:41:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.789 14:41:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.789 14:41:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.789 14:41:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:21.789 14:41:07 -- common/autotest_common.sh@1543 -- # continue 00:04:21.789 14:41:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:21.789 14:41:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.789 14:41:07 -- common/autotest_common.sh@10 -- # set +x 00:04:21.789 14:41:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:21.789 14:41:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.789 14:41:07 -- common/autotest_common.sh@10 -- # set +x 00:04:21.789 14:41:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.614 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.615 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.615 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.615 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.615 14:41:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:22.615 14:41:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.615 14:41:08 -- common/autotest_common.sh@10 -- # set +x 00:04:22.615 14:41:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:22.615 14:41:08 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:22.615 14:41:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.615 14:41:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:22.615 14:41:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:22.615 14:41:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:22.615 14:41:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:22.615 14:41:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:22.873 14:41:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:22.873 14:41:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:22.873 14:41:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.873 14:41:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:22.873 14:41:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.873 14:41:08 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:22.873 14:41:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:22.873 14:41:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.873 14:41:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:22.873 14:41:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.873 14:41:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.873 14:41:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.873 14:41:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:22.873 14:41:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.873 14:41:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.873 14:41:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.873 14:41:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:22.873 14:41:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.873 14:41:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:22.873 14:41:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.873 14:41:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:22.873 14:41:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.873 14:41:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.873 14:41:08 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:22.873 14:41:08 -- common/autotest_common.sh@1572 -- # return 0 00:04:22.873 14:41:08 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:22.873 14:41:08 -- common/autotest_common.sh@1580 -- # return 0 00:04:22.873 14:41:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:22.873 14:41:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:22.873 14:41:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.873 14:41:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.873 14:41:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:22.873 14:41:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.873 14:41:08 -- common/autotest_common.sh@10 -- # set +x 00:04:22.873 14:41:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:22.874 14:41:08 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.874 14:41:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.874 14:41:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.874 14:41:08 -- common/autotest_common.sh@10 -- # set +x 00:04:22.874 ************************************ 00:04:22.874 START TEST env 00:04:22.874 ************************************ 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.874 * Looking for test storage... 00:04:22.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.874 14:41:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.874 14:41:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.874 14:41:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.874 14:41:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.874 14:41:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.874 14:41:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.874 14:41:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.874 14:41:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.874 14:41:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.874 14:41:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.874 14:41:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.874 14:41:08 env -- scripts/common.sh@344 -- # case "$op" in 00:04:22.874 14:41:08 env -- scripts/common.sh@345 -- # : 1 00:04:22.874 14:41:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.874 14:41:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.874 14:41:08 env -- scripts/common.sh@365 -- # decimal 1 00:04:22.874 14:41:08 env -- scripts/common.sh@353 -- # local d=1 00:04:22.874 14:41:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.874 14:41:08 env -- scripts/common.sh@355 -- # echo 1 00:04:22.874 14:41:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.874 14:41:08 env -- scripts/common.sh@366 -- # decimal 2 00:04:22.874 14:41:08 env -- scripts/common.sh@353 -- # local d=2 00:04:22.874 14:41:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.874 14:41:08 env -- scripts/common.sh@355 -- # echo 2 00:04:22.874 14:41:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.874 14:41:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.874 14:41:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.874 14:41:08 env -- scripts/common.sh@368 -- # return 0 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.874 --rc genhtml_branch_coverage=1 00:04:22.874 --rc genhtml_function_coverage=1 00:04:22.874 --rc genhtml_legend=1 00:04:22.874 --rc geninfo_all_blocks=1 00:04:22.874 --rc geninfo_unexecuted_blocks=1 00:04:22.874 00:04:22.874 ' 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.874 --rc genhtml_branch_coverage=1 00:04:22.874 --rc genhtml_function_coverage=1 00:04:22.874 --rc genhtml_legend=1 00:04:22.874 --rc geninfo_all_blocks=1 00:04:22.874 --rc geninfo_unexecuted_blocks=1 00:04:22.874 00:04:22.874 ' 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.874 --rc genhtml_branch_coverage=1 00:04:22.874 --rc genhtml_function_coverage=1 00:04:22.874 --rc genhtml_legend=1 00:04:22.874 --rc geninfo_all_blocks=1 00:04:22.874 --rc geninfo_unexecuted_blocks=1 00:04:22.874 00:04:22.874 ' 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.874 --rc genhtml_branch_coverage=1 00:04:22.874 --rc genhtml_function_coverage=1 00:04:22.874 --rc genhtml_legend=1 00:04:22.874 --rc geninfo_all_blocks=1 00:04:22.874 --rc geninfo_unexecuted_blocks=1 00:04:22.874 00:04:22.874 ' 00:04:22.874 14:41:08 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.874 14:41:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.874 14:41:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.874 ************************************ 00:04:22.874 START TEST env_memory 00:04:22.874 ************************************ 00:04:22.874 14:41:08 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.874 00:04:22.874 00:04:22.874 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.874 http://cunit.sourceforge.net/ 00:04:22.874 00:04:22.874 00:04:22.874 Suite: memory 00:04:23.132 Test: alloc and free memory map ...[2024-11-17 14:41:08.434278] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.132 passed 00:04:23.132 Test: mem map translation ...[2024-11-17 14:41:08.472852] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.132 [2024-11-17 14:41:08.472891] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.132 [2024-11-17 14:41:08.472956] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.132 [2024-11-17 14:41:08.472970] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.132 passed 00:04:23.132 Test: mem map registration ...[2024-11-17 14:41:08.540800] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:23.132 [2024-11-17 14:41:08.540835] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:23.132 passed 00:04:23.132 Test: mem map adjacent registrations ...passed 00:04:23.132 00:04:23.132 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.132 suites 1 1 n/a 0 0 00:04:23.132 tests 4 4 4 0 0 00:04:23.132 asserts 152 152 152 0 n/a 00:04:23.132 00:04:23.132 Elapsed time = 0.232 seconds 00:04:23.132 00:04:23.132 real 0m0.264s 00:04:23.132 user 0m0.237s 00:04:23.132 sys 0m0.021s 00:04:23.132 14:41:08 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.132 14:41:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.132 ************************************ 00:04:23.132 END TEST env_memory 00:04:23.132 ************************************ 00:04:23.391 14:41:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.391 14:41:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.391 14:41:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.391 14:41:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.391 ************************************ 00:04:23.391 START TEST env_vtophys 00:04:23.391 ************************************ 00:04:23.391 14:41:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.391 EAL: lib.eal log level changed from notice to debug 00:04:23.391 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.391 EAL: Detected lcore 1 as core 0 on socket 0 00:04:23.391 EAL: Detected lcore 2 as core 0 on socket 0 00:04:23.391 EAL: Detected lcore 3 as core 0 on socket 0 00:04:23.391 EAL: Detected lcore 4 as core 0 on socket 0 00:04:23.391 EAL: Detected lcore 5 as core 0 on socket 0 00:04:23.391 EAL: Detected lcore 6 as core 0 on socket 0 00:04:23.391 EAL: Detected lcore 7 as core 0 on socket 0 00:04:23.391 EAL: Detected lcore 8 as core 0 on socket 0 00:04:23.391 EAL: Detected lcore 9 as core 0 on socket 0 00:04:23.391 EAL: Maximum logical cores by configuration: 128 00:04:23.391 EAL: Detected CPU lcores: 10 00:04:23.391 EAL: Detected NUMA nodes: 1 00:04:23.391 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.391 EAL: Detected shared linkage of DPDK 00:04:23.391 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:23.391 EAL: Selected IOVA mode 'PA' 00:04:23.391 EAL: Probing VFIO support... 00:04:23.391 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.391 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:23.391 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.391 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.391 EAL: Setting up physically contiguous memory... 00:04:23.391 EAL: Setting maximum number of open files to 524288 00:04:23.391 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.391 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.391 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.391 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.391 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.391 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.391 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.391 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.391 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.391 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.391 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.391 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.391 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.391 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.391 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.391 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.391 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.391 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.391 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.391 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.391 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.391 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.391 EAL: Hugepages will be freed exactly as allocated. 00:04:23.391 EAL: No shared files mode enabled, IPC is disabled 00:04:23.391 EAL: No shared files mode enabled, IPC is disabled 00:04:23.391 EAL: TSC frequency is ~2600000 KHz 00:04:23.391 EAL: Main lcore 0 is ready (tid=7f4eb9c46a40;cpuset=[0]) 00:04:23.391 EAL: Trying to obtain current memory policy. 00:04:23.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.391 EAL: Restoring previous memory policy: 0 00:04:23.391 EAL: request: mp_malloc_sync 00:04:23.391 EAL: No shared files mode enabled, IPC is disabled 00:04:23.391 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.391 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.391 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.391 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.391 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:23.391 00:04:23.391 00:04:23.391 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.391 http://cunit.sourceforge.net/ 00:04:23.391 00:04:23.391 00:04:23.391 Suite: components_suite 00:04:23.650 Test: vtophys_malloc_test ...passed 00:04:23.650 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:23.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.650 EAL: Restoring previous memory policy: 4 00:04:23.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.650 EAL: request: mp_malloc_sync 00:04:23.650 EAL: No shared files mode enabled, IPC is disabled 00:04:23.650 EAL: Heap on socket 0 was expanded by 4MB 00:04:23.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.650 EAL: request: mp_malloc_sync 00:04:23.650 EAL: No shared files mode enabled, IPC is disabled 00:04:23.650 EAL: Heap on socket 0 was shrunk by 4MB 00:04:23.650 EAL: Trying to obtain current memory policy. 00:04:23.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.650 EAL: Restoring previous memory policy: 4 00:04:23.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.650 EAL: request: mp_malloc_sync 00:04:23.650 EAL: No shared files mode enabled, IPC is disabled 00:04:23.650 EAL: Heap on socket 0 was expanded by 6MB 00:04:23.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.650 EAL: request: mp_malloc_sync 00:04:23.650 EAL: No shared files mode enabled, IPC is disabled 00:04:23.650 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.650 EAL: Trying to obtain current memory policy. 00:04:23.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.650 EAL: Restoring previous memory policy: 4 00:04:23.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.650 EAL: request: mp_malloc_sync 00:04:23.650 EAL: No shared files mode enabled, IPC is disabled 00:04:23.650 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.650 EAL: request: mp_malloc_sync 00:04:23.650 EAL: No shared files mode enabled, IPC is disabled 00:04:23.650 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.945 EAL: Trying to obtain current memory policy. 00:04:23.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.945 EAL: Restoring previous memory policy: 4 00:04:23.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.945 EAL: request: mp_malloc_sync 00:04:23.945 EAL: No shared files mode enabled, IPC is disabled 00:04:23.945 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.945 EAL: request: mp_malloc_sync 00:04:23.945 EAL: No shared files mode enabled, IPC is disabled 00:04:23.945 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.945 EAL: Trying to obtain current memory policy. 00:04:23.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.945 EAL: Restoring previous memory policy: 4 00:04:23.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.945 EAL: request: mp_malloc_sync 00:04:23.945 EAL: No shared files mode enabled, IPC is disabled 00:04:23.945 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.945 EAL: request: mp_malloc_sync 00:04:23.945 EAL: No shared files mode enabled, IPC is disabled 00:04:23.945 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.945 EAL: Trying to obtain current memory policy. 
00:04:23.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.945 EAL: Restoring previous memory policy: 4 00:04:23.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.945 EAL: request: mp_malloc_sync 00:04:23.945 EAL: No shared files mode enabled, IPC is disabled 00:04:23.945 EAL: Heap on socket 0 was expanded by 66MB 00:04:23.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.945 EAL: request: mp_malloc_sync 00:04:23.945 EAL: No shared files mode enabled, IPC is disabled 00:04:23.945 EAL: Heap on socket 0 was shrunk by 66MB 00:04:23.945 EAL: Trying to obtain current memory policy. 00:04:23.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.945 EAL: Restoring previous memory policy: 4 00:04:23.945 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.945 EAL: request: mp_malloc_sync 00:04:23.945 EAL: No shared files mode enabled, IPC is disabled 00:04:23.945 EAL: Heap on socket 0 was expanded by 130MB 00:04:24.223 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.223 EAL: request: mp_malloc_sync 00:04:24.223 EAL: No shared files mode enabled, IPC is disabled 00:04:24.223 EAL: Heap on socket 0 was shrunk by 130MB 00:04:24.223 EAL: Trying to obtain current memory policy. 00:04:24.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.482 EAL: Restoring previous memory policy: 4 00:04:24.482 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.482 EAL: request: mp_malloc_sync 00:04:24.482 EAL: No shared files mode enabled, IPC is disabled 00:04:24.482 EAL: Heap on socket 0 was expanded by 258MB 00:04:24.740 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.740 EAL: request: mp_malloc_sync 00:04:24.740 EAL: No shared files mode enabled, IPC is disabled 00:04:24.740 EAL: Heap on socket 0 was shrunk by 258MB 00:04:24.997 EAL: Trying to obtain current memory policy. 00:04:24.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.997 EAL: Restoring previous memory policy: 4 00:04:24.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.997 EAL: request: mp_malloc_sync 00:04:24.997 EAL: No shared files mode enabled, IPC is disabled 00:04:24.997 EAL: Heap on socket 0 was expanded by 514MB 00:04:25.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.563 EAL: request: mp_malloc_sync 00:04:25.563 EAL: No shared files mode enabled, IPC is disabled 00:04:25.563 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.130 EAL: Trying to obtain current memory policy. 
00:04:26.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.388 EAL: Restoring previous memory policy: 4 00:04:26.388 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.388 EAL: request: mp_malloc_sync 00:04:26.388 EAL: No shared files mode enabled, IPC is disabled 00:04:26.388 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.322 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.322 EAL: request: mp_malloc_sync 00:04:27.322 EAL: No shared files mode enabled, IPC is disabled 00:04:27.322 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:28.255 passed 00:04:28.255 00:04:28.255 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.255 suites 1 1 n/a 0 0 00:04:28.255 tests 2 2 2 0 0 00:04:28.255 asserts 5943 5943 5943 0 n/a 00:04:28.255 00:04:28.255 Elapsed time = 4.716 seconds 00:04:28.255 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.255 EAL: request: mp_malloc_sync 00:04:28.255 EAL: No shared files mode enabled, IPC is disabled 00:04:28.255 EAL: Heap on socket 0 was shrunk by 2MB 00:04:28.255 EAL: No shared files mode enabled, IPC is disabled 00:04:28.255 EAL: No shared files mode enabled, IPC is disabled 00:04:28.255 EAL: No shared files mode enabled, IPC is disabled 00:04:28.255 00:04:28.255 real 0m4.966s 00:04:28.255 user 0m4.224s 00:04:28.255 sys 0m0.596s 00:04:28.255 14:41:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.255 14:41:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:28.255 ************************************ 00:04:28.255 END TEST env_vtophys 00:04:28.255 ************************************ 00:04:28.255 14:41:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:28.255 14:41:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.255 14:41:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.255 14:41:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.255 ************************************ 00:04:28.255 START TEST env_pci 00:04:28.255 ************************************ 00:04:28.255 14:41:13 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:28.255 00:04:28.255 00:04:28.255 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.255 http://cunit.sourceforge.net/ 00:04:28.255 00:04:28.255 00:04:28.255 Suite: pci 00:04:28.255 Test: pci_hook ...[2024-11-17 14:41:13.708025] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56939 has claimed it 00:04:28.255 passed 00:04:28.255 00:04:28.255 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.255 suites 1 1 n/a 0 0 00:04:28.255 tests 1 1 1 0 0 00:04:28.255 asserts 25 25 25 0 n/a 00:04:28.255 00:04:28.255 Elapsed time = 0.007 seconds 00:04:28.255 EAL: Cannot find device (10000:00:01.0) 00:04:28.255 EAL: Failed to attach device on primary process 00:04:28.255 00:04:28.255 real 0m0.066s 00:04:28.255 user 0m0.026s 00:04:28.255 sys 0m0.039s 00:04:28.255 14:41:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.255 14:41:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:28.255 ************************************ 00:04:28.255 END TEST env_pci 00:04:28.255 ************************************ 00:04:28.255 14:41:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:28.255 14:41:13 env -- env/env.sh@15 -- # uname 00:04:28.255 14:41:13 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:28.255 14:41:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:28.255 14:41:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.255 14:41:13 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:28.255 14:41:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.255 14:41:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.255 ************************************ 00:04:28.255 START TEST env_dpdk_post_init 00:04:28.255 ************************************ 00:04:28.255 14:41:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.514 EAL: Detected CPU lcores: 10 00:04:28.514 EAL: Detected NUMA nodes: 1 00:04:28.514 EAL: Detected shared linkage of DPDK 00:04:28.514 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.514 EAL: Selected IOVA mode 'PA' 00:04:28.514 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.514 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:28.514 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:28.514 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:28.514 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:28.514 Starting DPDK initialization... 00:04:28.514 Starting SPDK post initialization... 00:04:28.514 SPDK NVMe probe 00:04:28.514 Attaching to 0000:00:10.0 00:04:28.514 Attaching to 0000:00:11.0 00:04:28.514 Attaching to 0000:00:12.0 00:04:28.514 Attaching to 0000:00:13.0 00:04:28.514 Attached to 0000:00:10.0 00:04:28.514 Attached to 0000:00:11.0 00:04:28.514 Attached to 0000:00:13.0 00:04:28.514 Attached to 0000:00:12.0 00:04:28.514 Cleaning up... 
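The Attaching/Attached lines above show EAL probing the four emulated NVMe controllers (vendor:device 1b36:0010) at 0000:00:10.0 through 0000:00:13.0 and binding the spdk_nvme driver to each before cleaning up. For reference, a minimal sketch of invoking the same binary by hand, with the path and flags copied from the run_test line above (hugepage and PCI driver setup are assumed to already be in place, as they are on this runner):

  # same invocation the harness used for env_dpdk_post_init
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000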
00:04:28.514 00:04:28.514 real 0m0.228s 00:04:28.514 user 0m0.066s 00:04:28.514 sys 0m0.064s 00:04:28.514 ************************************ 00:04:28.514 END TEST env_dpdk_post_init 00:04:28.514 14:41:14 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.514 14:41:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.514 ************************************ 00:04:28.514 14:41:14 env -- env/env.sh@26 -- # uname 00:04:28.772 14:41:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:28.772 14:41:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.772 14:41:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.772 14:41:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.772 14:41:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.772 ************************************ 00:04:28.772 START TEST env_mem_callbacks 00:04:28.772 ************************************ 00:04:28.772 14:41:14 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.772 EAL: Detected CPU lcores: 10 00:04:28.772 EAL: Detected NUMA nodes: 1 00:04:28.772 EAL: Detected shared linkage of DPDK 00:04:28.772 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.772 EAL: Selected IOVA mode 'PA' 00:04:28.772 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.772 00:04:28.772 00:04:28.772 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.772 http://cunit.sourceforge.net/ 00:04:28.772 00:04:28.772 00:04:28.772 Suite: memory 00:04:28.772 Test: test ... 00:04:28.772 register 0x200000200000 2097152 00:04:28.772 malloc 3145728 00:04:28.772 register 0x200000400000 4194304 00:04:28.772 buf 0x2000004fffc0 len 3145728 PASSED 00:04:28.772 malloc 64 00:04:28.772 buf 0x2000004ffec0 len 64 PASSED 00:04:28.772 malloc 4194304 00:04:28.772 register 0x200000800000 6291456 00:04:28.772 buf 0x2000009fffc0 len 4194304 PASSED 00:04:28.772 free 0x2000004fffc0 3145728 00:04:28.772 free 0x2000004ffec0 64 00:04:28.772 unregister 0x200000400000 4194304 PASSED 00:04:28.772 free 0x2000009fffc0 4194304 00:04:28.772 unregister 0x200000800000 6291456 PASSED 00:04:28.772 malloc 8388608 00:04:28.772 register 0x200000400000 10485760 00:04:28.772 buf 0x2000005fffc0 len 8388608 PASSED 00:04:28.772 free 0x2000005fffc0 8388608 00:04:28.772 unregister 0x200000400000 10485760 PASSED 00:04:28.772 passed 00:04:28.773 00:04:28.773 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.773 suites 1 1 n/a 0 0 00:04:28.773 tests 1 1 1 0 0 00:04:28.773 asserts 15 15 15 0 n/a 00:04:28.773 00:04:28.773 Elapsed time = 0.037 seconds 00:04:28.773 00:04:28.773 real 0m0.200s 00:04:28.773 user 0m0.058s 00:04:28.773 sys 0m0.041s 00:04:28.773 14:41:14 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.773 ************************************ 00:04:28.773 END TEST env_mem_callbacks 00:04:28.773 ************************************ 00:04:28.773 14:41:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:28.773 00:04:28.773 real 0m6.054s 00:04:28.773 user 0m4.765s 00:04:28.773 sys 0m0.942s 00:04:28.773 14:41:14 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.773 ************************************ 00:04:28.773 END TEST env 00:04:28.773 ************************************ 00:04:28.773 14:41:14 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:29.030 14:41:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:29.030 14:41:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.030 14:41:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.030 14:41:14 -- common/autotest_common.sh@10 -- # set +x 00:04:29.030 ************************************ 00:04:29.030 START TEST rpc 00:04:29.030 ************************************ 00:04:29.030 14:41:14 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:29.030 * Looking for test storage... 00:04:29.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:29.030 14:41:14 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.030 14:41:14 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.030 14:41:14 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.030 14:41:14 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.030 14:41:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.030 14:41:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.030 14:41:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.030 14:41:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.030 14:41:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.031 14:41:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.031 14:41:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.031 14:41:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.031 14:41:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.031 14:41:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.031 14:41:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.031 14:41:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.031 14:41:14 rpc -- scripts/common.sh@345 -- # : 1 00:04:29.031 14:41:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.031 14:41:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.031 14:41:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.031 14:41:14 rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.031 14:41:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.031 14:41:14 rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.031 14:41:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.031 14:41:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.031 14:41:14 rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.031 14:41:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.031 14:41:14 rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.031 14:41:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.031 14:41:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.031 14:41:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.031 14:41:14 rpc -- scripts/common.sh@368 -- # return 0 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.031 --rc genhtml_branch_coverage=1 00:04:29.031 --rc genhtml_function_coverage=1 00:04:29.031 --rc genhtml_legend=1 00:04:29.031 --rc geninfo_all_blocks=1 00:04:29.031 --rc geninfo_unexecuted_blocks=1 00:04:29.031 00:04:29.031 ' 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.031 --rc genhtml_branch_coverage=1 00:04:29.031 --rc genhtml_function_coverage=1 00:04:29.031 --rc genhtml_legend=1 00:04:29.031 --rc geninfo_all_blocks=1 00:04:29.031 --rc geninfo_unexecuted_blocks=1 00:04:29.031 00:04:29.031 ' 00:04:29.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.031 --rc genhtml_branch_coverage=1 00:04:29.031 --rc genhtml_function_coverage=1 00:04:29.031 --rc genhtml_legend=1 00:04:29.031 --rc geninfo_all_blocks=1 00:04:29.031 --rc geninfo_unexecuted_blocks=1 00:04:29.031 00:04:29.031 ' 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.031 --rc genhtml_branch_coverage=1 00:04:29.031 --rc genhtml_function_coverage=1 00:04:29.031 --rc genhtml_legend=1 00:04:29.031 --rc geninfo_all_blocks=1 00:04:29.031 --rc geninfo_unexecuted_blocks=1 00:04:29.031 00:04:29.031 ' 00:04:29.031 14:41:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57060 00:04:29.031 14:41:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.031 14:41:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57060 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@835 -- # '[' -z 57060 ']' 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.031 14:41:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
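The rpc suite that follows launches a dedicated spdk_tgt (-e bdev enables the bdev tracepoint group) and then drives it over its JSON-RPC UNIX socket. A minimal sketch of the create/inspect/delete cycle that rpc_integrity performs below, using the in-tree rpc.py client; the client path and the default socket /var/tmp/spdk.sock are assumptions based on the standard SPDK layout, while the RPC method names are the ones visible in this log:

  # start the target with bdev tracepoints, then wait until the RPC socket answers
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  # rpc_integrity cycle: malloc bdev -> passthru on top -> inspect -> tear down
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
  rpc bdev_malloc_create 8 512                    # 8 MB disk, 512-byte blocks -> "Malloc0" (16384 blocks)
  rpc bdev_passthru_create -b Malloc0 -p Passthru0
  rpc bdev_get_bdevs | jq length                  # expect 2 while both bdevs exist
  rpc bdev_passthru_delete Passthru0
  rpc bdev_malloc_delete Malloc0
  rpc bdev_get_bdevs | jq length                  # expect 0 after cleanup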
00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.031 14:41:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.031 [2024-11-17 14:41:14.532506] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:29.031 [2024-11-17 14:41:14.532617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57060 ] 00:04:29.289 [2024-11-17 14:41:14.691426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.289 [2024-11-17 14:41:14.784863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:29.289 [2024-11-17 14:41:14.784915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57060' to capture a snapshot of events at runtime. 00:04:29.289 [2024-11-17 14:41:14.784936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:29.289 [2024-11-17 14:41:14.784945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:29.289 [2024-11-17 14:41:14.784953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57060 for offline analysis/debug. 00:04:29.289 [2024-11-17 14:41:14.785777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.225 14:41:15 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.225 14:41:15 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.225 14:41:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.225 14:41:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.225 14:41:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:30.225 14:41:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:30.225 14:41:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.225 14:41:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.225 14:41:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.225 ************************************ 00:04:30.225 START TEST rpc_integrity 00:04:30.225 ************************************ 00:04:30.225 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:30.225 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.225 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.225 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.225 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.225 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.225 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.225 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.225 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.225 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.225 14:41:15 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.225 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.225 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:30.225 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.225 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.225 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.225 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.225 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.225 { 00:04:30.225 "name": "Malloc0", 00:04:30.225 "aliases": [ 00:04:30.225 "dd75a323-762a-4fb5-97b3-edfd633f8e1d" 00:04:30.225 ], 00:04:30.225 "product_name": "Malloc disk", 00:04:30.225 "block_size": 512, 00:04:30.225 "num_blocks": 16384, 00:04:30.225 "uuid": "dd75a323-762a-4fb5-97b3-edfd633f8e1d", 00:04:30.225 "assigned_rate_limits": { 00:04:30.225 "rw_ios_per_sec": 0, 00:04:30.225 "rw_mbytes_per_sec": 0, 00:04:30.225 "r_mbytes_per_sec": 0, 00:04:30.225 "w_mbytes_per_sec": 0 00:04:30.225 }, 00:04:30.225 "claimed": false, 00:04:30.225 "zoned": false, 00:04:30.225 "supported_io_types": { 00:04:30.225 "read": true, 00:04:30.225 "write": true, 00:04:30.225 "unmap": true, 00:04:30.225 "flush": true, 00:04:30.225 "reset": true, 00:04:30.225 "nvme_admin": false, 00:04:30.225 "nvme_io": false, 00:04:30.225 "nvme_io_md": false, 00:04:30.225 "write_zeroes": true, 00:04:30.225 "zcopy": true, 00:04:30.225 "get_zone_info": false, 00:04:30.225 "zone_management": false, 00:04:30.225 "zone_append": false, 00:04:30.225 "compare": false, 00:04:30.225 "compare_and_write": false, 00:04:30.225 "abort": true, 00:04:30.225 "seek_hole": false, 00:04:30.226 "seek_data": false, 00:04:30.226 "copy": true, 00:04:30.226 "nvme_iov_md": false 00:04:30.226 }, 00:04:30.226 "memory_domains": [ 00:04:30.226 { 00:04:30.226 "dma_device_id": "system", 00:04:30.226 "dma_device_type": 1 00:04:30.226 }, 00:04:30.226 { 00:04:30.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.226 "dma_device_type": 2 00:04:30.226 } 00:04:30.226 ], 00:04:30.226 "driver_specific": {} 00:04:30.226 } 00:04:30.226 ]' 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.226 [2024-11-17 14:41:15.537428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:30.226 [2024-11-17 14:41:15.537484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.226 [2024-11-17 14:41:15.537507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:30.226 [2024-11-17 14:41:15.537518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.226 [2024-11-17 14:41:15.539706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.226 [2024-11-17 14:41:15.539746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.226 Passthru0 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.226 
14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.226 { 00:04:30.226 "name": "Malloc0", 00:04:30.226 "aliases": [ 00:04:30.226 "dd75a323-762a-4fb5-97b3-edfd633f8e1d" 00:04:30.226 ], 00:04:30.226 "product_name": "Malloc disk", 00:04:30.226 "block_size": 512, 00:04:30.226 "num_blocks": 16384, 00:04:30.226 "uuid": "dd75a323-762a-4fb5-97b3-edfd633f8e1d", 00:04:30.226 "assigned_rate_limits": { 00:04:30.226 "rw_ios_per_sec": 0, 00:04:30.226 "rw_mbytes_per_sec": 0, 00:04:30.226 "r_mbytes_per_sec": 0, 00:04:30.226 "w_mbytes_per_sec": 0 00:04:30.226 }, 00:04:30.226 "claimed": true, 00:04:30.226 "claim_type": "exclusive_write", 00:04:30.226 "zoned": false, 00:04:30.226 "supported_io_types": { 00:04:30.226 "read": true, 00:04:30.226 "write": true, 00:04:30.226 "unmap": true, 00:04:30.226 "flush": true, 00:04:30.226 "reset": true, 00:04:30.226 "nvme_admin": false, 00:04:30.226 "nvme_io": false, 00:04:30.226 "nvme_io_md": false, 00:04:30.226 "write_zeroes": true, 00:04:30.226 "zcopy": true, 00:04:30.226 "get_zone_info": false, 00:04:30.226 "zone_management": false, 00:04:30.226 "zone_append": false, 00:04:30.226 "compare": false, 00:04:30.226 "compare_and_write": false, 00:04:30.226 "abort": true, 00:04:30.226 "seek_hole": false, 00:04:30.226 "seek_data": false, 00:04:30.226 "copy": true, 00:04:30.226 "nvme_iov_md": false 00:04:30.226 }, 00:04:30.226 "memory_domains": [ 00:04:30.226 { 00:04:30.226 "dma_device_id": "system", 00:04:30.226 "dma_device_type": 1 00:04:30.226 }, 00:04:30.226 { 00:04:30.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.226 "dma_device_type": 2 00:04:30.226 } 00:04:30.226 ], 00:04:30.226 "driver_specific": {} 00:04:30.226 }, 00:04:30.226 { 00:04:30.226 "name": "Passthru0", 00:04:30.226 "aliases": [ 00:04:30.226 "7e078951-a2f2-5bda-a208-23dfa885328b" 00:04:30.226 ], 00:04:30.226 "product_name": "passthru", 00:04:30.226 "block_size": 512, 00:04:30.226 "num_blocks": 16384, 00:04:30.226 "uuid": "7e078951-a2f2-5bda-a208-23dfa885328b", 00:04:30.226 "assigned_rate_limits": { 00:04:30.226 "rw_ios_per_sec": 0, 00:04:30.226 "rw_mbytes_per_sec": 0, 00:04:30.226 "r_mbytes_per_sec": 0, 00:04:30.226 "w_mbytes_per_sec": 0 00:04:30.226 }, 00:04:30.226 "claimed": false, 00:04:30.226 "zoned": false, 00:04:30.226 "supported_io_types": { 00:04:30.226 "read": true, 00:04:30.226 "write": true, 00:04:30.226 "unmap": true, 00:04:30.226 "flush": true, 00:04:30.226 "reset": true, 00:04:30.226 "nvme_admin": false, 00:04:30.226 "nvme_io": false, 00:04:30.226 "nvme_io_md": false, 00:04:30.226 "write_zeroes": true, 00:04:30.226 "zcopy": true, 00:04:30.226 "get_zone_info": false, 00:04:30.226 "zone_management": false, 00:04:30.226 "zone_append": false, 00:04:30.226 "compare": false, 00:04:30.226 "compare_and_write": false, 00:04:30.226 "abort": true, 00:04:30.226 "seek_hole": false, 00:04:30.226 "seek_data": false, 00:04:30.226 "copy": true, 00:04:30.226 "nvme_iov_md": false 00:04:30.226 }, 00:04:30.226 "memory_domains": [ 00:04:30.226 { 00:04:30.226 "dma_device_id": "system", 00:04:30.226 "dma_device_type": 1 00:04:30.226 }, 00:04:30.226 { 00:04:30.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.226 "dma_device_type": 2 
00:04:30.226 } 00:04:30.226 ], 00:04:30.226 "driver_specific": { 00:04:30.226 "passthru": { 00:04:30.226 "name": "Passthru0", 00:04:30.226 "base_bdev_name": "Malloc0" 00:04:30.226 } 00:04:30.226 } 00:04:30.226 } 00:04:30.226 ]' 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.226 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.226 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.226 ************************************ 00:04:30.227 END TEST rpc_integrity 00:04:30.227 ************************************ 00:04:30.227 14:41:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.227 00:04:30.227 real 0m0.261s 00:04:30.227 user 0m0.137s 00:04:30.227 sys 0m0.035s 00:04:30.227 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.227 14:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.227 14:41:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:30.227 14:41:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.227 14:41:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.227 14:41:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.227 ************************************ 00:04:30.227 START TEST rpc_plugins 00:04:30.227 ************************************ 00:04:30.227 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:30.227 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:30.227 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.227 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.227 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.227 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:30.227 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:30.227 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.227 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.485 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.485 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:30.485 { 00:04:30.485 "name": "Malloc1", 00:04:30.485 "aliases": 
[ 00:04:30.485 "1db58fdb-7296-48e7-8ef4-12e676ce4c25" 00:04:30.485 ], 00:04:30.485 "product_name": "Malloc disk", 00:04:30.485 "block_size": 4096, 00:04:30.485 "num_blocks": 256, 00:04:30.485 "uuid": "1db58fdb-7296-48e7-8ef4-12e676ce4c25", 00:04:30.485 "assigned_rate_limits": { 00:04:30.485 "rw_ios_per_sec": 0, 00:04:30.485 "rw_mbytes_per_sec": 0, 00:04:30.485 "r_mbytes_per_sec": 0, 00:04:30.485 "w_mbytes_per_sec": 0 00:04:30.485 }, 00:04:30.485 "claimed": false, 00:04:30.485 "zoned": false, 00:04:30.485 "supported_io_types": { 00:04:30.485 "read": true, 00:04:30.485 "write": true, 00:04:30.485 "unmap": true, 00:04:30.485 "flush": true, 00:04:30.485 "reset": true, 00:04:30.485 "nvme_admin": false, 00:04:30.485 "nvme_io": false, 00:04:30.485 "nvme_io_md": false, 00:04:30.485 "write_zeroes": true, 00:04:30.485 "zcopy": true, 00:04:30.485 "get_zone_info": false, 00:04:30.485 "zone_management": false, 00:04:30.485 "zone_append": false, 00:04:30.485 "compare": false, 00:04:30.485 "compare_and_write": false, 00:04:30.485 "abort": true, 00:04:30.485 "seek_hole": false, 00:04:30.485 "seek_data": false, 00:04:30.485 "copy": true, 00:04:30.485 "nvme_iov_md": false 00:04:30.485 }, 00:04:30.485 "memory_domains": [ 00:04:30.485 { 00:04:30.485 "dma_device_id": "system", 00:04:30.485 "dma_device_type": 1 00:04:30.485 }, 00:04:30.485 { 00:04:30.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.485 "dma_device_type": 2 00:04:30.485 } 00:04:30.485 ], 00:04:30.485 "driver_specific": {} 00:04:30.485 } 00:04:30.485 ]' 00:04:30.485 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:30.485 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:30.485 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:30.485 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.485 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.485 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.485 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:30.485 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.485 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.485 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.485 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:30.485 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:30.485 ************************************ 00:04:30.485 END TEST rpc_plugins 00:04:30.485 ************************************ 00:04:30.485 14:41:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:30.485 00:04:30.485 real 0m0.122s 00:04:30.485 user 0m0.063s 00:04:30.485 sys 0m0.018s 00:04:30.485 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.485 14:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.485 14:41:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:30.485 14:41:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.485 14:41:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.485 14:41:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.485 ************************************ 00:04:30.485 START TEST rpc_trace_cmd_test 00:04:30.485 ************************************ 00:04:30.485 14:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:30.485 14:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:30.485 14:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:30.485 14:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.485 14:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.485 14:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.485 14:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:30.485 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57060", 00:04:30.485 "tpoint_group_mask": "0x8", 00:04:30.485 "iscsi_conn": { 00:04:30.485 "mask": "0x2", 00:04:30.485 "tpoint_mask": "0x0" 00:04:30.485 }, 00:04:30.485 "scsi": { 00:04:30.485 "mask": "0x4", 00:04:30.485 "tpoint_mask": "0x0" 00:04:30.485 }, 00:04:30.485 "bdev": { 00:04:30.485 "mask": "0x8", 00:04:30.485 "tpoint_mask": "0xffffffffffffffff" 00:04:30.485 }, 00:04:30.485 "nvmf_rdma": { 00:04:30.485 "mask": "0x10", 00:04:30.485 "tpoint_mask": "0x0" 00:04:30.485 }, 00:04:30.485 "nvmf_tcp": { 00:04:30.485 "mask": "0x20", 00:04:30.485 "tpoint_mask": "0x0" 00:04:30.485 }, 00:04:30.485 "ftl": { 00:04:30.485 "mask": "0x40", 00:04:30.485 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "blobfs": { 00:04:30.486 "mask": "0x80", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "dsa": { 00:04:30.486 "mask": "0x200", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "thread": { 00:04:30.486 "mask": "0x400", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "nvme_pcie": { 00:04:30.486 "mask": "0x800", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "iaa": { 00:04:30.486 "mask": "0x1000", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "nvme_tcp": { 00:04:30.486 "mask": "0x2000", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "bdev_nvme": { 00:04:30.486 "mask": "0x4000", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "sock": { 00:04:30.486 "mask": "0x8000", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "blob": { 00:04:30.486 "mask": "0x10000", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "bdev_raid": { 00:04:30.486 "mask": "0x20000", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 }, 00:04:30.486 "scheduler": { 00:04:30.486 "mask": "0x40000", 00:04:30.486 "tpoint_mask": "0x0" 00:04:30.486 } 00:04:30.486 }' 00:04:30.486 14:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:30.486 14:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:30.486 14:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:30.486 14:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:30.486 14:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:30.486 14:41:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:30.486 14:41:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:30.744 14:41:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:30.744 14:41:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:30.744 ************************************ 00:04:30.744 END TEST rpc_trace_cmd_test 00:04:30.744 ************************************ 00:04:30.744 14:41:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:30.744 00:04:30.744 real 0m0.168s 
00:04:30.744 user 0m0.136s 00:04:30.744 sys 0m0.023s 00:04:30.744 14:41:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.744 14:41:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.744 14:41:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:30.744 14:41:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:30.744 14:41:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:30.744 14:41:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.744 14:41:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.744 14:41:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.744 ************************************ 00:04:30.744 START TEST rpc_daemon_integrity 00:04:30.744 ************************************ 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.744 { 00:04:30.744 "name": "Malloc2", 00:04:30.744 "aliases": [ 00:04:30.744 "1e039613-cff0-403d-96a3-7ce6e90c0925" 00:04:30.744 ], 00:04:30.744 "product_name": "Malloc disk", 00:04:30.744 "block_size": 512, 00:04:30.744 "num_blocks": 16384, 00:04:30.744 "uuid": "1e039613-cff0-403d-96a3-7ce6e90c0925", 00:04:30.744 "assigned_rate_limits": { 00:04:30.744 "rw_ios_per_sec": 0, 00:04:30.744 "rw_mbytes_per_sec": 0, 00:04:30.744 "r_mbytes_per_sec": 0, 00:04:30.744 "w_mbytes_per_sec": 0 00:04:30.744 }, 00:04:30.744 "claimed": false, 00:04:30.744 "zoned": false, 00:04:30.744 "supported_io_types": { 00:04:30.744 "read": true, 00:04:30.744 "write": true, 00:04:30.744 "unmap": true, 00:04:30.744 "flush": true, 00:04:30.744 "reset": true, 00:04:30.744 "nvme_admin": false, 00:04:30.744 "nvme_io": false, 00:04:30.744 "nvme_io_md": false, 00:04:30.744 "write_zeroes": true, 00:04:30.744 "zcopy": true, 00:04:30.744 "get_zone_info": false, 00:04:30.744 "zone_management": false, 00:04:30.744 "zone_append": false, 00:04:30.744 "compare": false, 00:04:30.744 
"compare_and_write": false, 00:04:30.744 "abort": true, 00:04:30.744 "seek_hole": false, 00:04:30.744 "seek_data": false, 00:04:30.744 "copy": true, 00:04:30.744 "nvme_iov_md": false 00:04:30.744 }, 00:04:30.744 "memory_domains": [ 00:04:30.744 { 00:04:30.744 "dma_device_id": "system", 00:04:30.744 "dma_device_type": 1 00:04:30.744 }, 00:04:30.744 { 00:04:30.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.744 "dma_device_type": 2 00:04:30.744 } 00:04:30.744 ], 00:04:30.744 "driver_specific": {} 00:04:30.744 } 00:04:30.744 ]' 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.744 [2024-11-17 14:41:16.243722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:30.744 [2024-11-17 14:41:16.243772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.744 [2024-11-17 14:41:16.243790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:30.744 [2024-11-17 14:41:16.243800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.744 [2024-11-17 14:41:16.245879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.744 [2024-11-17 14:41:16.245915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.744 Passthru0 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.744 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.744 { 00:04:30.744 "name": "Malloc2", 00:04:30.744 "aliases": [ 00:04:30.744 "1e039613-cff0-403d-96a3-7ce6e90c0925" 00:04:30.744 ], 00:04:30.744 "product_name": "Malloc disk", 00:04:30.744 "block_size": 512, 00:04:30.744 "num_blocks": 16384, 00:04:30.744 "uuid": "1e039613-cff0-403d-96a3-7ce6e90c0925", 00:04:30.744 "assigned_rate_limits": { 00:04:30.744 "rw_ios_per_sec": 0, 00:04:30.744 "rw_mbytes_per_sec": 0, 00:04:30.744 "r_mbytes_per_sec": 0, 00:04:30.744 "w_mbytes_per_sec": 0 00:04:30.744 }, 00:04:30.744 "claimed": true, 00:04:30.744 "claim_type": "exclusive_write", 00:04:30.744 "zoned": false, 00:04:30.744 "supported_io_types": { 00:04:30.744 "read": true, 00:04:30.744 "write": true, 00:04:30.744 "unmap": true, 00:04:30.744 "flush": true, 00:04:30.744 "reset": true, 00:04:30.744 "nvme_admin": false, 00:04:30.744 "nvme_io": false, 00:04:30.744 "nvme_io_md": false, 00:04:30.744 "write_zeroes": true, 00:04:30.744 "zcopy": true, 00:04:30.744 "get_zone_info": false, 00:04:30.744 "zone_management": false, 00:04:30.744 "zone_append": false, 00:04:30.744 "compare": false, 00:04:30.744 "compare_and_write": false, 00:04:30.744 "abort": true, 00:04:30.744 "seek_hole": false, 00:04:30.744 "seek_data": false, 
00:04:30.744 "copy": true, 00:04:30.744 "nvme_iov_md": false 00:04:30.744 }, 00:04:30.744 "memory_domains": [ 00:04:30.744 { 00:04:30.744 "dma_device_id": "system", 00:04:30.744 "dma_device_type": 1 00:04:30.744 }, 00:04:30.744 { 00:04:30.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.744 "dma_device_type": 2 00:04:30.744 } 00:04:30.744 ], 00:04:30.744 "driver_specific": {} 00:04:30.744 }, 00:04:30.744 { 00:04:30.744 "name": "Passthru0", 00:04:30.744 "aliases": [ 00:04:30.744 "3971c74d-befe-5a4c-97d5-679e9a411406" 00:04:30.744 ], 00:04:30.744 "product_name": "passthru", 00:04:30.744 "block_size": 512, 00:04:30.744 "num_blocks": 16384, 00:04:30.744 "uuid": "3971c74d-befe-5a4c-97d5-679e9a411406", 00:04:30.744 "assigned_rate_limits": { 00:04:30.744 "rw_ios_per_sec": 0, 00:04:30.744 "rw_mbytes_per_sec": 0, 00:04:30.744 "r_mbytes_per_sec": 0, 00:04:30.744 "w_mbytes_per_sec": 0 00:04:30.744 }, 00:04:30.744 "claimed": false, 00:04:30.744 "zoned": false, 00:04:30.744 "supported_io_types": { 00:04:30.744 "read": true, 00:04:30.744 "write": true, 00:04:30.744 "unmap": true, 00:04:30.744 "flush": true, 00:04:30.744 "reset": true, 00:04:30.744 "nvme_admin": false, 00:04:30.744 "nvme_io": false, 00:04:30.744 "nvme_io_md": false, 00:04:30.744 "write_zeroes": true, 00:04:30.744 "zcopy": true, 00:04:30.744 "get_zone_info": false, 00:04:30.744 "zone_management": false, 00:04:30.744 "zone_append": false, 00:04:30.744 "compare": false, 00:04:30.744 "compare_and_write": false, 00:04:30.744 "abort": true, 00:04:30.744 "seek_hole": false, 00:04:30.744 "seek_data": false, 00:04:30.744 "copy": true, 00:04:30.744 "nvme_iov_md": false 00:04:30.744 }, 00:04:30.744 "memory_domains": [ 00:04:30.744 { 00:04:30.745 "dma_device_id": "system", 00:04:30.745 "dma_device_type": 1 00:04:30.745 }, 00:04:30.745 { 00:04:30.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.745 "dma_device_type": 2 00:04:30.745 } 00:04:30.745 ], 00:04:30.745 "driver_specific": { 00:04:30.745 "passthru": { 00:04:30.745 "name": "Passthru0", 00:04:30.745 "base_bdev_name": "Malloc2" 00:04:30.745 } 00:04:30.745 } 00:04:30.745 } 00:04:30.745 ]' 00:04:30.745 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.003 ************************************ 00:04:31.003 END TEST rpc_daemon_integrity 00:04:31.003 ************************************ 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.003 00:04:31.003 real 0m0.249s 00:04:31.003 user 0m0.129s 00:04:31.003 sys 0m0.037s 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.003 14:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.003 14:41:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:31.003 14:41:16 rpc -- rpc/rpc.sh@84 -- # killprocess 57060 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@954 -- # '[' -z 57060 ']' 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@958 -- # kill -0 57060 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57060 00:04:31.003 killing process with pid 57060 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57060' 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@973 -- # kill 57060 00:04:31.003 14:41:16 rpc -- common/autotest_common.sh@978 -- # wait 57060 00:04:32.381 00:04:32.381 real 0m3.363s 00:04:32.381 user 0m3.743s 00:04:32.381 sys 0m0.647s 00:04:32.381 14:41:17 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.381 14:41:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.381 ************************************ 00:04:32.381 END TEST rpc 00:04:32.381 ************************************ 00:04:32.381 14:41:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:32.381 14:41:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.381 14:41:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.381 14:41:17 -- common/autotest_common.sh@10 -- # set +x 00:04:32.381 ************************************ 00:04:32.381 START TEST skip_rpc 00:04:32.381 ************************************ 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:32.381 * Looking for test storage... 
00:04:32.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.381 14:41:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.381 --rc genhtml_branch_coverage=1 00:04:32.381 --rc genhtml_function_coverage=1 00:04:32.381 --rc genhtml_legend=1 00:04:32.381 --rc geninfo_all_blocks=1 00:04:32.381 --rc geninfo_unexecuted_blocks=1 00:04:32.381 00:04:32.381 ' 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.381 --rc genhtml_branch_coverage=1 00:04:32.381 --rc genhtml_function_coverage=1 00:04:32.381 --rc genhtml_legend=1 00:04:32.381 --rc geninfo_all_blocks=1 00:04:32.381 --rc geninfo_unexecuted_blocks=1 00:04:32.381 00:04:32.381 ' 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:32.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.381 --rc genhtml_branch_coverage=1 00:04:32.381 --rc genhtml_function_coverage=1 00:04:32.381 --rc genhtml_legend=1 00:04:32.381 --rc geninfo_all_blocks=1 00:04:32.381 --rc geninfo_unexecuted_blocks=1 00:04:32.381 00:04:32.381 ' 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.381 --rc genhtml_branch_coverage=1 00:04:32.381 --rc genhtml_function_coverage=1 00:04:32.381 --rc genhtml_legend=1 00:04:32.381 --rc geninfo_all_blocks=1 00:04:32.381 --rc geninfo_unexecuted_blocks=1 00:04:32.381 00:04:32.381 ' 00:04:32.381 14:41:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.381 14:41:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:32.381 14:41:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.381 14:41:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.381 ************************************ 00:04:32.381 START TEST skip_rpc 00:04:32.381 ************************************ 00:04:32.381 14:41:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:32.381 14:41:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57273 00:04:32.381 14:41:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.381 14:41:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:32.381 14:41:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:32.639 [2024-11-17 14:41:17.962779] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:32.639 [2024-11-17 14:41:17.962875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57273 ] 00:04:32.639 [2024-11-17 14:41:18.117707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.897 [2024-11-17 14:41:18.213381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.205 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57273 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57273 ']' 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57273 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57273 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.206 killing process with pid 57273 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57273' 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57273 00:04:38.206 14:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57273 00:04:38.772 00:04:38.772 real 0m6.202s 00:04:38.772 user 0m5.838s 00:04:38.772 sys 0m0.257s 00:04:38.772 ************************************ 00:04:38.772 END TEST skip_rpc 00:04:38.772 ************************************ 00:04:38.772 14:41:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.772 14:41:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:38.772 14:41:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:38.772 14:41:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.772 14:41:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.772 14:41:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.772 ************************************ 00:04:38.772 START TEST skip_rpc_with_json 00:04:38.772 ************************************ 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57371 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57371 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57371 ']' 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.772 14:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.772 [2024-11-17 14:41:24.216143] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:38.772 [2024-11-17 14:41:24.216784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57371 ] 00:04:39.031 [2024-11-17 14:41:24.371560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.031 [2024-11-17 14:41:24.446700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.597 [2024-11-17 14:41:25.039916] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:39.597 request: 00:04:39.597 { 00:04:39.597 "trtype": "tcp", 00:04:39.597 "method": "nvmf_get_transports", 00:04:39.597 "req_id": 1 00:04:39.597 } 00:04:39.597 Got JSON-RPC error response 00:04:39.597 response: 00:04:39.597 { 00:04:39.597 "code": -19, 00:04:39.597 "message": "No such device" 00:04:39.597 } 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.597 [2024-11-17 14:41:25.052014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.597 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.855 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.855 14:41:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:39.855 { 00:04:39.855 "subsystems": [ 00:04:39.855 { 00:04:39.855 "subsystem": "fsdev", 00:04:39.855 "config": [ 00:04:39.855 { 00:04:39.855 "method": "fsdev_set_opts", 00:04:39.855 "params": { 00:04:39.855 "fsdev_io_pool_size": 65535, 00:04:39.855 "fsdev_io_cache_size": 256 00:04:39.855 } 00:04:39.855 } 00:04:39.855 ] 00:04:39.855 }, 00:04:39.855 { 00:04:39.855 "subsystem": "keyring", 00:04:39.855 "config": [] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "iobuf", 00:04:39.856 "config": [ 00:04:39.856 { 00:04:39.856 "method": "iobuf_set_options", 00:04:39.856 "params": { 00:04:39.856 "small_pool_count": 8192, 00:04:39.856 "large_pool_count": 1024, 00:04:39.856 "small_bufsize": 8192, 00:04:39.856 "large_bufsize": 135168, 00:04:39.856 "enable_numa": false 00:04:39.856 } 00:04:39.856 } 00:04:39.856 ] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "sock", 00:04:39.856 "config": [ 00:04:39.856 { 
00:04:39.856 "method": "sock_set_default_impl", 00:04:39.856 "params": { 00:04:39.856 "impl_name": "posix" 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "sock_impl_set_options", 00:04:39.856 "params": { 00:04:39.856 "impl_name": "ssl", 00:04:39.856 "recv_buf_size": 4096, 00:04:39.856 "send_buf_size": 4096, 00:04:39.856 "enable_recv_pipe": true, 00:04:39.856 "enable_quickack": false, 00:04:39.856 "enable_placement_id": 0, 00:04:39.856 "enable_zerocopy_send_server": true, 00:04:39.856 "enable_zerocopy_send_client": false, 00:04:39.856 "zerocopy_threshold": 0, 00:04:39.856 "tls_version": 0, 00:04:39.856 "enable_ktls": false 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "sock_impl_set_options", 00:04:39.856 "params": { 00:04:39.856 "impl_name": "posix", 00:04:39.856 "recv_buf_size": 2097152, 00:04:39.856 "send_buf_size": 2097152, 00:04:39.856 "enable_recv_pipe": true, 00:04:39.856 "enable_quickack": false, 00:04:39.856 "enable_placement_id": 0, 00:04:39.856 "enable_zerocopy_send_server": true, 00:04:39.856 "enable_zerocopy_send_client": false, 00:04:39.856 "zerocopy_threshold": 0, 00:04:39.856 "tls_version": 0, 00:04:39.856 "enable_ktls": false 00:04:39.856 } 00:04:39.856 } 00:04:39.856 ] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "vmd", 00:04:39.856 "config": [] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "accel", 00:04:39.856 "config": [ 00:04:39.856 { 00:04:39.856 "method": "accel_set_options", 00:04:39.856 "params": { 00:04:39.856 "small_cache_size": 128, 00:04:39.856 "large_cache_size": 16, 00:04:39.856 "task_count": 2048, 00:04:39.856 "sequence_count": 2048, 00:04:39.856 "buf_count": 2048 00:04:39.856 } 00:04:39.856 } 00:04:39.856 ] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "bdev", 00:04:39.856 "config": [ 00:04:39.856 { 00:04:39.856 "method": "bdev_set_options", 00:04:39.856 "params": { 00:04:39.856 "bdev_io_pool_size": 65535, 00:04:39.856 "bdev_io_cache_size": 256, 00:04:39.856 "bdev_auto_examine": true, 00:04:39.856 "iobuf_small_cache_size": 128, 00:04:39.856 "iobuf_large_cache_size": 16 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "bdev_raid_set_options", 00:04:39.856 "params": { 00:04:39.856 "process_window_size_kb": 1024, 00:04:39.856 "process_max_bandwidth_mb_sec": 0 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "bdev_iscsi_set_options", 00:04:39.856 "params": { 00:04:39.856 "timeout_sec": 30 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "bdev_nvme_set_options", 00:04:39.856 "params": { 00:04:39.856 "action_on_timeout": "none", 00:04:39.856 "timeout_us": 0, 00:04:39.856 "timeout_admin_us": 0, 00:04:39.856 "keep_alive_timeout_ms": 10000, 00:04:39.856 "arbitration_burst": 0, 00:04:39.856 "low_priority_weight": 0, 00:04:39.856 "medium_priority_weight": 0, 00:04:39.856 "high_priority_weight": 0, 00:04:39.856 "nvme_adminq_poll_period_us": 10000, 00:04:39.856 "nvme_ioq_poll_period_us": 0, 00:04:39.856 "io_queue_requests": 0, 00:04:39.856 "delay_cmd_submit": true, 00:04:39.856 "transport_retry_count": 4, 00:04:39.856 "bdev_retry_count": 3, 00:04:39.856 "transport_ack_timeout": 0, 00:04:39.856 "ctrlr_loss_timeout_sec": 0, 00:04:39.856 "reconnect_delay_sec": 0, 00:04:39.856 "fast_io_fail_timeout_sec": 0, 00:04:39.856 "disable_auto_failback": false, 00:04:39.856 "generate_uuids": false, 00:04:39.856 "transport_tos": 0, 00:04:39.856 "nvme_error_stat": false, 00:04:39.856 "rdma_srq_size": 0, 00:04:39.856 "io_path_stat": false, 
00:04:39.856 "allow_accel_sequence": false, 00:04:39.856 "rdma_max_cq_size": 0, 00:04:39.856 "rdma_cm_event_timeout_ms": 0, 00:04:39.856 "dhchap_digests": [ 00:04:39.856 "sha256", 00:04:39.856 "sha384", 00:04:39.856 "sha512" 00:04:39.856 ], 00:04:39.856 "dhchap_dhgroups": [ 00:04:39.856 "null", 00:04:39.856 "ffdhe2048", 00:04:39.856 "ffdhe3072", 00:04:39.856 "ffdhe4096", 00:04:39.856 "ffdhe6144", 00:04:39.856 "ffdhe8192" 00:04:39.856 ] 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "bdev_nvme_set_hotplug", 00:04:39.856 "params": { 00:04:39.856 "period_us": 100000, 00:04:39.856 "enable": false 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "bdev_wait_for_examine" 00:04:39.856 } 00:04:39.856 ] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "scsi", 00:04:39.856 "config": null 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "scheduler", 00:04:39.856 "config": [ 00:04:39.856 { 00:04:39.856 "method": "framework_set_scheduler", 00:04:39.856 "params": { 00:04:39.856 "name": "static" 00:04:39.856 } 00:04:39.856 } 00:04:39.856 ] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "vhost_scsi", 00:04:39.856 "config": [] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "vhost_blk", 00:04:39.856 "config": [] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "ublk", 00:04:39.856 "config": [] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "nbd", 00:04:39.856 "config": [] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "nvmf", 00:04:39.856 "config": [ 00:04:39.856 { 00:04:39.856 "method": "nvmf_set_config", 00:04:39.856 "params": { 00:04:39.856 "discovery_filter": "match_any", 00:04:39.856 "admin_cmd_passthru": { 00:04:39.856 "identify_ctrlr": false 00:04:39.856 }, 00:04:39.856 "dhchap_digests": [ 00:04:39.856 "sha256", 00:04:39.856 "sha384", 00:04:39.856 "sha512" 00:04:39.856 ], 00:04:39.856 "dhchap_dhgroups": [ 00:04:39.856 "null", 00:04:39.856 "ffdhe2048", 00:04:39.856 "ffdhe3072", 00:04:39.856 "ffdhe4096", 00:04:39.856 "ffdhe6144", 00:04:39.856 "ffdhe8192" 00:04:39.856 ] 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "nvmf_set_max_subsystems", 00:04:39.856 "params": { 00:04:39.856 "max_subsystems": 1024 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "nvmf_set_crdt", 00:04:39.856 "params": { 00:04:39.856 "crdt1": 0, 00:04:39.856 "crdt2": 0, 00:04:39.856 "crdt3": 0 00:04:39.856 } 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "method": "nvmf_create_transport", 00:04:39.856 "params": { 00:04:39.856 "trtype": "TCP", 00:04:39.856 "max_queue_depth": 128, 00:04:39.856 "max_io_qpairs_per_ctrlr": 127, 00:04:39.856 "in_capsule_data_size": 4096, 00:04:39.856 "max_io_size": 131072, 00:04:39.856 "io_unit_size": 131072, 00:04:39.856 "max_aq_depth": 128, 00:04:39.856 "num_shared_buffers": 511, 00:04:39.856 "buf_cache_size": 4294967295, 00:04:39.856 "dif_insert_or_strip": false, 00:04:39.856 "zcopy": false, 00:04:39.856 "c2h_success": true, 00:04:39.856 "sock_priority": 0, 00:04:39.856 "abort_timeout_sec": 1, 00:04:39.856 "ack_timeout": 0, 00:04:39.856 "data_wr_pool_size": 0 00:04:39.856 } 00:04:39.856 } 00:04:39.856 ] 00:04:39.856 }, 00:04:39.856 { 00:04:39.856 "subsystem": "iscsi", 00:04:39.856 "config": [ 00:04:39.856 { 00:04:39.856 "method": "iscsi_set_options", 00:04:39.856 "params": { 00:04:39.856 "node_base": "iqn.2016-06.io.spdk", 00:04:39.856 "max_sessions": 128, 00:04:39.856 "max_connections_per_session": 2, 00:04:39.856 "max_queue_depth": 64, 00:04:39.856 
"default_time2wait": 2, 00:04:39.856 "default_time2retain": 20, 00:04:39.856 "first_burst_length": 8192, 00:04:39.856 "immediate_data": true, 00:04:39.856 "allow_duplicated_isid": false, 00:04:39.856 "error_recovery_level": 0, 00:04:39.856 "nop_timeout": 60, 00:04:39.856 "nop_in_interval": 30, 00:04:39.856 "disable_chap": false, 00:04:39.856 "require_chap": false, 00:04:39.856 "mutual_chap": false, 00:04:39.856 "chap_group": 0, 00:04:39.856 "max_large_datain_per_connection": 64, 00:04:39.857 "max_r2t_per_connection": 4, 00:04:39.857 "pdu_pool_size": 36864, 00:04:39.857 "immediate_data_pool_size": 16384, 00:04:39.857 "data_out_pool_size": 2048 00:04:39.857 } 00:04:39.857 } 00:04:39.857 ] 00:04:39.857 } 00:04:39.857 ] 00:04:39.857 } 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57371 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57371 ']' 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57371 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57371 00:04:39.857 killing process with pid 57371 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57371' 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57371 00:04:39.857 14:41:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57371 00:04:41.232 14:41:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57405 00:04:41.232 14:41:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:41.232 14:41:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57405 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57405 ']' 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57405 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57405 00:04:46.522 killing process with pid 57405 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57405' 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57405 00:04:46.522 14:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57405 00:04:47.088 14:41:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:47.088 14:41:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:47.088 00:04:47.088 real 0m8.444s 00:04:47.088 user 0m8.112s 00:04:47.088 sys 0m0.550s 00:04:47.088 14:41:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.088 ************************************ 00:04:47.088 END TEST skip_rpc_with_json 00:04:47.088 ************************************ 00:04:47.088 14:41:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.088 14:41:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:47.088 14:41:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.088 14:41:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.088 14:41:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.347 ************************************ 00:04:47.347 START TEST skip_rpc_with_delay 00:04:47.347 ************************************ 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:47.347 [2024-11-17 14:41:32.714502] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.347 00:04:47.347 real 0m0.122s 00:04:47.347 user 0m0.067s 00:04:47.347 sys 0m0.054s 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.347 ************************************ 00:04:47.347 END TEST skip_rpc_with_delay 00:04:47.347 ************************************ 00:04:47.347 14:41:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:47.347 14:41:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:47.347 14:41:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:47.347 14:41:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:47.347 14:41:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.347 14:41:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.347 14:41:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.347 ************************************ 00:04:47.347 START TEST exit_on_failed_rpc_init 00:04:47.347 ************************************ 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57522 00:04:47.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57522 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57522 ']' 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.347 14:41:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.606 [2024-11-17 14:41:32.892596] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:47.606 [2024-11-17 14:41:32.892707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57522 ] 00:04:47.606 [2024-11-17 14:41:33.044775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.606 [2024-11-17 14:41:33.118201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:48.172 14:41:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.430 [2024-11-17 14:41:33.756702] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:48.430 [2024-11-17 14:41:33.756814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57540 ] 00:04:48.430 [2024-11-17 14:41:33.914911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.689 [2024-11-17 14:41:34.007207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.689 [2024-11-17 14:41:34.007282] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:48.689 [2024-11-17 14:41:34.007295] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:48.689 [2024-11-17 14:41:34.007307] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57522 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57522 ']' 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57522 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57522 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57522' 00:04:48.689 killing process with pid 57522 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57522 00:04:48.689 14:41:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57522 00:04:50.065 00:04:50.065 real 0m2.554s 00:04:50.065 user 0m2.832s 00:04:50.065 sys 0m0.377s 00:04:50.065 14:41:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.065 ************************************ 00:04:50.065 END TEST exit_on_failed_rpc_init 00:04:50.065 ************************************ 00:04:50.065 14:41:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.065 14:41:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.065 ************************************ 00:04:50.065 END TEST skip_rpc 00:04:50.065 ************************************ 00:04:50.065 00:04:50.065 real 0m17.675s 00:04:50.065 user 0m16.984s 00:04:50.065 sys 0m1.415s 00:04:50.065 14:41:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.065 14:41:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.065 14:41:35 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:50.065 14:41:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.065 14:41:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.065 14:41:35 -- common/autotest_common.sh@10 -- # set +x 00:04:50.065 
************************************ 00:04:50.065 START TEST rpc_client 00:04:50.065 ************************************ 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:50.065 * Looking for test storage... 00:04:50.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.065 14:41:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.065 --rc genhtml_branch_coverage=1 00:04:50.065 --rc genhtml_function_coverage=1 00:04:50.065 --rc genhtml_legend=1 00:04:50.065 --rc geninfo_all_blocks=1 00:04:50.065 --rc geninfo_unexecuted_blocks=1 00:04:50.065 00:04:50.065 ' 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.065 --rc genhtml_branch_coverage=1 00:04:50.065 --rc genhtml_function_coverage=1 00:04:50.065 --rc genhtml_legend=1 00:04:50.065 --rc geninfo_all_blocks=1 00:04:50.065 --rc geninfo_unexecuted_blocks=1 00:04:50.065 00:04:50.065 ' 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.065 --rc genhtml_branch_coverage=1 00:04:50.065 --rc genhtml_function_coverage=1 00:04:50.065 --rc genhtml_legend=1 00:04:50.065 --rc geninfo_all_blocks=1 00:04:50.065 --rc geninfo_unexecuted_blocks=1 00:04:50.065 00:04:50.065 ' 00:04:50.065 14:41:35 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.065 --rc genhtml_branch_coverage=1 00:04:50.065 --rc genhtml_function_coverage=1 00:04:50.065 --rc genhtml_legend=1 00:04:50.065 --rc geninfo_all_blocks=1 00:04:50.065 --rc geninfo_unexecuted_blocks=1 00:04:50.065 00:04:50.065 ' 00:04:50.065 14:41:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:50.324 OK 00:04:50.324 14:41:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.324 00:04:50.324 real 0m0.174s 00:04:50.324 user 0m0.096s 00:04:50.324 sys 0m0.081s 00:04:50.324 14:41:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.324 ************************************ 00:04:50.324 END TEST rpc_client 00:04:50.324 ************************************ 00:04:50.324 14:41:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:50.324 14:41:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.324 14:41:35 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.324 14:41:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.324 14:41:35 -- common/autotest_common.sh@10 -- # set +x 00:04:50.324 ************************************ 00:04:50.324 START TEST json_config 00:04:50.324 ************************************ 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.324 14:41:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.324 14:41:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.324 14:41:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.324 14:41:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.324 14:41:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.324 14:41:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.324 14:41:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.324 14:41:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.324 14:41:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.324 14:41:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.324 14:41:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.324 14:41:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:50.324 14:41:35 json_config -- scripts/common.sh@345 -- # : 1 00:04:50.324 14:41:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.324 14:41:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.324 14:41:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:50.324 14:41:35 json_config -- scripts/common.sh@353 -- # local d=1 00:04:50.324 14:41:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.324 14:41:35 json_config -- scripts/common.sh@355 -- # echo 1 00:04:50.324 14:41:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.324 14:41:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:50.324 14:41:35 json_config -- scripts/common.sh@353 -- # local d=2 00:04:50.324 14:41:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.324 14:41:35 json_config -- scripts/common.sh@355 -- # echo 2 00:04:50.324 14:41:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.324 14:41:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.324 14:41:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.324 14:41:35 json_config -- scripts/common.sh@368 -- # return 0 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.324 --rc genhtml_branch_coverage=1 00:04:50.324 --rc genhtml_function_coverage=1 00:04:50.324 --rc genhtml_legend=1 00:04:50.324 --rc geninfo_all_blocks=1 00:04:50.324 --rc geninfo_unexecuted_blocks=1 00:04:50.324 00:04:50.324 ' 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.324 --rc genhtml_branch_coverage=1 00:04:50.324 --rc genhtml_function_coverage=1 00:04:50.324 --rc genhtml_legend=1 00:04:50.324 --rc geninfo_all_blocks=1 00:04:50.324 --rc geninfo_unexecuted_blocks=1 00:04:50.324 00:04:50.324 ' 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.324 --rc genhtml_branch_coverage=1 00:04:50.324 --rc genhtml_function_coverage=1 00:04:50.324 --rc genhtml_legend=1 00:04:50.324 --rc geninfo_all_blocks=1 00:04:50.324 --rc geninfo_unexecuted_blocks=1 00:04:50.324 00:04:50.324 ' 00:04:50.324 14:41:35 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.324 --rc genhtml_branch_coverage=1 00:04:50.324 --rc genhtml_function_coverage=1 00:04:50.324 --rc genhtml_legend=1 00:04:50.324 --rc geninfo_all_blocks=1 00:04:50.324 --rc geninfo_unexecuted_blocks=1 00:04:50.324 00:04:50.324 ' 00:04:50.324 14:41:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.324 14:41:35 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4954d9b-ee6f-472e-b8b2-399f71fd8a7f 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a4954d9b-ee6f-472e-b8b2-399f71fd8a7f 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.324 14:41:35 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.325 14:41:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.325 14:41:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.325 14:41:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.325 14:41:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.325 14:41:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.325 14:41:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.325 14:41:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.325 14:41:35 json_config -- paths/export.sh@5 -- # export PATH 00:04:50.325 14:41:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.325 14:41:35 json_config -- nvmf/common.sh@51 -- # : 0 00:04:50.325 14:41:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.325 14:41:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.325 14:41:35 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.325 14:41:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.325 14:41:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.325 14:41:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.325 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.325 14:41:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.325 14:41:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.325 14:41:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.325 14:41:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:50.325 14:41:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:50.325 14:41:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:50.325 14:41:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:50.325 14:41:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.325 14:41:35 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:50.325 WARNING: No tests are enabled so not running JSON configuration tests 00:04:50.325 14:41:35 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:50.325 00:04:50.325 real 0m0.143s 00:04:50.325 user 0m0.091s 00:04:50.325 sys 0m0.050s 00:04:50.325 14:41:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.325 14:41:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.325 ************************************ 00:04:50.325 END TEST json_config 00:04:50.325 ************************************ 00:04:50.585 14:41:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.585 14:41:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.585 14:41:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.585 14:41:35 -- common/autotest_common.sh@10 -- # set +x 00:04:50.585 ************************************ 00:04:50.585 START TEST json_config_extra_key 00:04:50.585 ************************************ 00:04:50.585 14:41:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.585 14:41:35 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.585 14:41:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.585 14:41:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.585 14:41:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.585 14:41:36 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:50.585 14:41:36 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.585 14:41:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.585 --rc genhtml_branch_coverage=1 00:04:50.585 --rc genhtml_function_coverage=1 00:04:50.585 --rc genhtml_legend=1 00:04:50.585 --rc geninfo_all_blocks=1 00:04:50.585 --rc geninfo_unexecuted_blocks=1 00:04:50.585 00:04:50.585 ' 00:04:50.585 14:41:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.585 --rc genhtml_branch_coverage=1 00:04:50.585 --rc genhtml_function_coverage=1 00:04:50.585 --rc genhtml_legend=1 00:04:50.585 --rc geninfo_all_blocks=1 00:04:50.585 --rc geninfo_unexecuted_blocks=1 00:04:50.585 00:04:50.585 ' 00:04:50.585 14:41:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.585 --rc genhtml_branch_coverage=1 00:04:50.585 --rc genhtml_function_coverage=1 00:04:50.585 --rc genhtml_legend=1 00:04:50.585 --rc geninfo_all_blocks=1 00:04:50.585 --rc geninfo_unexecuted_blocks=1 00:04:50.585 00:04:50.585 ' 00:04:50.585 14:41:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.585 --rc genhtml_branch_coverage=1 00:04:50.585 --rc 
genhtml_function_coverage=1 00:04:50.585 --rc genhtml_legend=1 00:04:50.585 --rc geninfo_all_blocks=1 00:04:50.585 --rc geninfo_unexecuted_blocks=1 00:04:50.585 00:04:50.585 ' 00:04:50.585 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4954d9b-ee6f-472e-b8b2-399f71fd8a7f 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a4954d9b-ee6f-472e-b8b2-399f71fd8a7f 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.585 14:41:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.585 14:41:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.585 14:41:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.585 14:41:36 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.585 14:41:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.585 14:41:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.585 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.585 14:41:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.585 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:50.586 INFO: launching applications... 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
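[editor's note] The trace above shows json_config/common.sh keeping per-app bookkeeping in bash associative arrays (app_pid, app_socket, app_params, configs_path, all keyed by the app name 'target'), and json_config_test_start_app later launches spdk_tgt with exactly those values (the launch command shows up a little further down in the trace). A minimal sketch of that pattern follows; the array names, paths and spdk_tgt arguments are copied from the trace, while the start_target wrapper is illustrative and not the real helper.

  #!/usr/bin/env bash
  # Per-app bookkeeping, as echoed in the xtrace above (key: app name).
  declare -A app_pid=(['target']='')
  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

  # Illustrative launch wrapper (not the real json_config_test_start_app).
  start_target() {
      local app=$1
      # app_params is expanded unquoted on purpose so "-m 0x1 -s 1024" splits into words.
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
          -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
      app_pid[$app]=$!        # remember the PID so the app can be shut down later
  }

  start_target target
  echo "target running as PID ${app_pid[target]} on ${app_socket[target]}"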
00:04:50.586 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.586 Waiting for target to run... 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57734 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57734 /var/tmp/spdk_tgt.sock 00:04:50.586 14:41:36 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57734 ']' 00:04:50.586 14:41:36 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.586 14:41:36 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.586 14:41:36 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.586 14:41:36 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.586 14:41:36 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.586 14:41:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.586 [2024-11-17 14:41:36.112769] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:50.586 [2024-11-17 14:41:36.112893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57734 ] 00:04:51.152 [2024-11-17 14:41:36.425038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.152 [2024-11-17 14:41:36.513983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.718 00:04:51.718 INFO: shutting down applications... 00:04:51.718 14:41:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.718 14:41:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:51.718 14:41:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.718 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
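[editor's note] At this point the trace has handed PID 57734 to waitforlisten, which blocks until spdk_tgt is up and listening on /var/tmp/spdk_tgt.sock (the "Waiting for process to start up..." message and max_retries=100 come from that call). The body of waitforlisten in autotest_common.sh is not shown in this excerpt, so the loop below is a hypothetical stand-in: only the retry budget and the socket path are taken from the trace.

  #!/usr/bin/env bash
  # Hypothetical stand-in for waitforlisten: poll until the target both stays
  # alive and has created its RPC UNIX socket, or give up after max_retries.
  wait_for_rpc_socket() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # process died while starting
          [[ -S $rpc_addr ]] && return 0            # UNIX socket exists: target is listening
          sleep 0.1
      done
      return 1    # never came up within the retry budget
  }

  # usage: wait_for_rpc_socket 57734 /var/tmp/spdk_tgt.sock && echo "target is up"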
00:04:51.718 14:41:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.718 14:41:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.718 14:41:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.718 14:41:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57734 ]] 00:04:51.718 14:41:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57734 00:04:51.718 14:41:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.718 14:41:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.718 14:41:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57734 00:04:51.718 14:41:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.977 14:41:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.977 14:41:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.977 14:41:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57734 00:04:51.977 14:41:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.545 14:41:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.545 14:41:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.545 14:41:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57734 00:04:52.545 14:41:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.112 14:41:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.112 14:41:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.112 14:41:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57734 00:04:53.112 14:41:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.679 14:41:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.679 14:41:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.679 14:41:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57734 00:04:53.679 14:41:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.679 14:41:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.679 14:41:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.679 SPDK target shutdown done 00:04:53.679 Success 00:04:53.679 14:41:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.679 14:41:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.679 00:04:53.679 real 0m3.120s 00:04:53.679 user 0m2.704s 00:04:53.679 sys 0m0.390s 00:04:53.679 14:41:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.679 ************************************ 00:04:53.679 14:41:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.679 END TEST json_config_extra_key 00:04:53.679 ************************************ 00:04:53.680 14:41:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.680 14:41:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.680 14:41:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.680 14:41:39 -- common/autotest_common.sh@10 -- # set +x 00:04:53.680 
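[editor's note] The shutdown sequence traced above sends SIGINT to PID 57734 and then polls up to 30 times, half a second apart, until kill -0 stops succeeding (json_config/common.sh@38-45). The same idea as a standalone sketch; the function name and the final SIGKILL escalation are mine and not shown in the trace.

  #!/usr/bin/env bash
  # Graceful shutdown: ask nicely with SIGINT, then wait for the PID to vanish.
  shutdown_app() {
      local pid=$1
      kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
      for ((i = 0; i < 30; i++)); do
          kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
          sleep 0.5                                  # same cadence as the trace
      done
      echo "app $pid did not exit in time; escalating" >&2
      kill -9 "$pid"                                 # last resort, not part of the traced flow
  }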
************************************ 00:04:53.680 START TEST alias_rpc 00:04:53.680 ************************************ 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.680 * Looking for test storage... 00:04:53.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.680 14:41:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.680 --rc genhtml_branch_coverage=1 00:04:53.680 --rc genhtml_function_coverage=1 00:04:53.680 --rc genhtml_legend=1 00:04:53.680 --rc geninfo_all_blocks=1 00:04:53.680 --rc geninfo_unexecuted_blocks=1 00:04:53.680 00:04:53.680 ' 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.680 --rc genhtml_branch_coverage=1 00:04:53.680 --rc genhtml_function_coverage=1 00:04:53.680 --rc genhtml_legend=1 00:04:53.680 --rc geninfo_all_blocks=1 00:04:53.680 --rc geninfo_unexecuted_blocks=1 00:04:53.680 00:04:53.680 ' 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.680 --rc genhtml_branch_coverage=1 00:04:53.680 --rc genhtml_function_coverage=1 00:04:53.680 --rc genhtml_legend=1 00:04:53.680 --rc geninfo_all_blocks=1 00:04:53.680 --rc geninfo_unexecuted_blocks=1 00:04:53.680 00:04:53.680 ' 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.680 --rc genhtml_branch_coverage=1 00:04:53.680 --rc genhtml_function_coverage=1 00:04:53.680 --rc genhtml_legend=1 00:04:53.680 --rc geninfo_all_blocks=1 00:04:53.680 --rc geninfo_unexecuted_blocks=1 00:04:53.680 00:04:53.680 ' 00:04:53.680 14:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.680 14:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57827 00:04:53.680 14:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57827 00:04:53.680 14:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57827 ']' 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
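[editor's note] Each test in this log re-runs the same lcov version gate before picking coverage options: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field, exactly as the xtrace walks through above. A compact re-implementation of that comparison is sketched below; the function names mirror scripts/common.sh, but the body is condensed (no separate decimal helper) and assumes purely numeric fields.

  #!/usr/bin/env bash
  # Field-by-field version compare, in the spirit of the cmp_versions trace above.
  cmp_versions() {
      local IFS=.-: op=$2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < max; v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}      # missing fields count as 0
          (( a > b )) && { [[ $op == '>' ]]; return; }
          (( a < b )) && { [[ $op == '<' ]]; return; }
      done
      return 1    # equal: neither strictly '<' nor '>'
  }
  lt() { cmp_versions "$1" '<' "$2"; }

  lt 1.15 2 && echo "lcov 1.15 is older than 2"      # the branch taken in the trace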
00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.680 14:41:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.937 [2024-11-17 14:41:39.283214] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:53.937 [2024-11-17 14:41:39.283337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57827 ] 00:04:53.937 [2024-11-17 14:41:39.444956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.195 [2024-11-17 14:41:39.539322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.763 14:41:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.763 14:41:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:54.763 14:41:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:55.022 14:41:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57827 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57827 ']' 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57827 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57827 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.022 killing process with pid 57827 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57827' 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@973 -- # kill 57827 00:04:55.022 14:41:40 alias_rpc -- common/autotest_common.sh@978 -- # wait 57827 00:04:56.460 00:04:56.460 real 0m2.710s 00:04:56.460 user 0m2.792s 00:04:56.460 sys 0m0.411s 00:04:56.460 14:41:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.460 ************************************ 00:04:56.460 14:41:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.460 END TEST alias_rpc 00:04:56.460 ************************************ 00:04:56.460 14:41:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:56.460 14:41:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.460 14:41:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.460 14:41:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.460 14:41:41 -- common/autotest_common.sh@10 -- # set +x 00:04:56.460 ************************************ 00:04:56.460 START TEST spdkcli_tcp 00:04:56.460 ************************************ 00:04:56.460 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.460 * Looking for test storage... 
00:04:56.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:56.460 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.460 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.460 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.460 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.460 14:41:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.461 14:41:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.461 --rc genhtml_branch_coverage=1 00:04:56.461 --rc genhtml_function_coverage=1 00:04:56.461 --rc genhtml_legend=1 00:04:56.461 --rc geninfo_all_blocks=1 00:04:56.461 --rc geninfo_unexecuted_blocks=1 00:04:56.461 00:04:56.461 ' 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.461 --rc genhtml_branch_coverage=1 00:04:56.461 --rc genhtml_function_coverage=1 00:04:56.461 --rc genhtml_legend=1 00:04:56.461 --rc geninfo_all_blocks=1 00:04:56.461 --rc geninfo_unexecuted_blocks=1 00:04:56.461 
00:04:56.461 ' 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.461 --rc genhtml_branch_coverage=1 00:04:56.461 --rc genhtml_function_coverage=1 00:04:56.461 --rc genhtml_legend=1 00:04:56.461 --rc geninfo_all_blocks=1 00:04:56.461 --rc geninfo_unexecuted_blocks=1 00:04:56.461 00:04:56.461 ' 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.461 --rc genhtml_branch_coverage=1 00:04:56.461 --rc genhtml_function_coverage=1 00:04:56.461 --rc genhtml_legend=1 00:04:56.461 --rc geninfo_all_blocks=1 00:04:56.461 --rc geninfo_unexecuted_blocks=1 00:04:56.461 00:04:56.461 ' 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57917 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57917 00:04:56.461 14:41:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57917 ']' 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.461 14:41:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.719 [2024-11-17 14:41:42.014608] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
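[editor's note] The spdkcli_tcp run above starts spdk_tgt with -m 0x3 -p 0, i.e. a core mask selecting cores 0 and 1 with core 0 as the main core, which is why the EAL output just below reports two cores and two reactors. A tiny illustrative helper (not part of the test scripts) for expanding such a hex mask into a core list:

  #!/usr/bin/env bash
  # Illustrative only: list the CPU cores selected by an SPDK/DPDK core mask.
  cores_in_mask() {
      local mask=$(( $1 )) core=0 cores=()
      while (( mask )); do
          (( mask & 1 )) && cores+=("$core")   # record this core if its bit is set
          (( mask >>= 1, core++ ))
      done
      echo "${cores[*]}"
  }

  cores_in_mask 0x3    # prints: 0 1  (the two reactors seen in this run)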
00:04:56.719 [2024-11-17 14:41:42.015079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57917 ] 00:04:56.719 [2024-11-17 14:41:42.174566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.978 [2024-11-17 14:41:42.268685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.978 [2024-11-17 14:41:42.268759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.545 14:41:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.545 14:41:42 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:57.545 14:41:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:57.545 14:41:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57934 00:04:57.545 14:41:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:57.545 [ 00:04:57.545 "bdev_malloc_delete", 00:04:57.545 "bdev_malloc_create", 00:04:57.545 "bdev_null_resize", 00:04:57.545 "bdev_null_delete", 00:04:57.545 "bdev_null_create", 00:04:57.545 "bdev_nvme_cuse_unregister", 00:04:57.545 "bdev_nvme_cuse_register", 00:04:57.545 "bdev_opal_new_user", 00:04:57.545 "bdev_opal_set_lock_state", 00:04:57.545 "bdev_opal_delete", 00:04:57.545 "bdev_opal_get_info", 00:04:57.545 "bdev_opal_create", 00:04:57.545 "bdev_nvme_opal_revert", 00:04:57.545 "bdev_nvme_opal_init", 00:04:57.545 "bdev_nvme_send_cmd", 00:04:57.545 "bdev_nvme_set_keys", 00:04:57.545 "bdev_nvme_get_path_iostat", 00:04:57.545 "bdev_nvme_get_mdns_discovery_info", 00:04:57.545 "bdev_nvme_stop_mdns_discovery", 00:04:57.545 "bdev_nvme_start_mdns_discovery", 00:04:57.545 "bdev_nvme_set_multipath_policy", 00:04:57.545 "bdev_nvme_set_preferred_path", 00:04:57.545 "bdev_nvme_get_io_paths", 00:04:57.545 "bdev_nvme_remove_error_injection", 00:04:57.545 "bdev_nvme_add_error_injection", 00:04:57.545 "bdev_nvme_get_discovery_info", 00:04:57.545 "bdev_nvme_stop_discovery", 00:04:57.545 "bdev_nvme_start_discovery", 00:04:57.545 "bdev_nvme_get_controller_health_info", 00:04:57.545 "bdev_nvme_disable_controller", 00:04:57.545 "bdev_nvme_enable_controller", 00:04:57.545 "bdev_nvme_reset_controller", 00:04:57.545 "bdev_nvme_get_transport_statistics", 00:04:57.545 "bdev_nvme_apply_firmware", 00:04:57.545 "bdev_nvme_detach_controller", 00:04:57.545 "bdev_nvme_get_controllers", 00:04:57.545 "bdev_nvme_attach_controller", 00:04:57.545 "bdev_nvme_set_hotplug", 00:04:57.545 "bdev_nvme_set_options", 00:04:57.545 "bdev_passthru_delete", 00:04:57.545 "bdev_passthru_create", 00:04:57.545 "bdev_lvol_set_parent_bdev", 00:04:57.545 "bdev_lvol_set_parent", 00:04:57.545 "bdev_lvol_check_shallow_copy", 00:04:57.545 "bdev_lvol_start_shallow_copy", 00:04:57.545 "bdev_lvol_grow_lvstore", 00:04:57.545 "bdev_lvol_get_lvols", 00:04:57.545 "bdev_lvol_get_lvstores", 00:04:57.545 "bdev_lvol_delete", 00:04:57.545 "bdev_lvol_set_read_only", 00:04:57.545 "bdev_lvol_resize", 00:04:57.545 "bdev_lvol_decouple_parent", 00:04:57.545 "bdev_lvol_inflate", 00:04:57.545 "bdev_lvol_rename", 00:04:57.545 "bdev_lvol_clone_bdev", 00:04:57.545 "bdev_lvol_clone", 00:04:57.545 "bdev_lvol_snapshot", 00:04:57.545 "bdev_lvol_create", 00:04:57.545 "bdev_lvol_delete_lvstore", 00:04:57.545 "bdev_lvol_rename_lvstore", 00:04:57.545 
"bdev_lvol_create_lvstore", 00:04:57.545 "bdev_raid_set_options", 00:04:57.545 "bdev_raid_remove_base_bdev", 00:04:57.545 "bdev_raid_add_base_bdev", 00:04:57.545 "bdev_raid_delete", 00:04:57.545 "bdev_raid_create", 00:04:57.545 "bdev_raid_get_bdevs", 00:04:57.545 "bdev_error_inject_error", 00:04:57.545 "bdev_error_delete", 00:04:57.545 "bdev_error_create", 00:04:57.545 "bdev_split_delete", 00:04:57.545 "bdev_split_create", 00:04:57.545 "bdev_delay_delete", 00:04:57.545 "bdev_delay_create", 00:04:57.545 "bdev_delay_update_latency", 00:04:57.545 "bdev_zone_block_delete", 00:04:57.545 "bdev_zone_block_create", 00:04:57.545 "blobfs_create", 00:04:57.545 "blobfs_detect", 00:04:57.545 "blobfs_set_cache_size", 00:04:57.545 "bdev_xnvme_delete", 00:04:57.545 "bdev_xnvme_create", 00:04:57.545 "bdev_aio_delete", 00:04:57.545 "bdev_aio_rescan", 00:04:57.545 "bdev_aio_create", 00:04:57.545 "bdev_ftl_set_property", 00:04:57.545 "bdev_ftl_get_properties", 00:04:57.545 "bdev_ftl_get_stats", 00:04:57.545 "bdev_ftl_unmap", 00:04:57.545 "bdev_ftl_unload", 00:04:57.545 "bdev_ftl_delete", 00:04:57.545 "bdev_ftl_load", 00:04:57.545 "bdev_ftl_create", 00:04:57.545 "bdev_virtio_attach_controller", 00:04:57.545 "bdev_virtio_scsi_get_devices", 00:04:57.545 "bdev_virtio_detach_controller", 00:04:57.545 "bdev_virtio_blk_set_hotplug", 00:04:57.545 "bdev_iscsi_delete", 00:04:57.545 "bdev_iscsi_create", 00:04:57.545 "bdev_iscsi_set_options", 00:04:57.545 "accel_error_inject_error", 00:04:57.545 "ioat_scan_accel_module", 00:04:57.545 "dsa_scan_accel_module", 00:04:57.545 "iaa_scan_accel_module", 00:04:57.545 "keyring_file_remove_key", 00:04:57.545 "keyring_file_add_key", 00:04:57.545 "keyring_linux_set_options", 00:04:57.545 "fsdev_aio_delete", 00:04:57.545 "fsdev_aio_create", 00:04:57.545 "iscsi_get_histogram", 00:04:57.545 "iscsi_enable_histogram", 00:04:57.545 "iscsi_set_options", 00:04:57.545 "iscsi_get_auth_groups", 00:04:57.545 "iscsi_auth_group_remove_secret", 00:04:57.545 "iscsi_auth_group_add_secret", 00:04:57.545 "iscsi_delete_auth_group", 00:04:57.545 "iscsi_create_auth_group", 00:04:57.545 "iscsi_set_discovery_auth", 00:04:57.545 "iscsi_get_options", 00:04:57.545 "iscsi_target_node_request_logout", 00:04:57.545 "iscsi_target_node_set_redirect", 00:04:57.545 "iscsi_target_node_set_auth", 00:04:57.545 "iscsi_target_node_add_lun", 00:04:57.545 "iscsi_get_stats", 00:04:57.545 "iscsi_get_connections", 00:04:57.545 "iscsi_portal_group_set_auth", 00:04:57.545 "iscsi_start_portal_group", 00:04:57.545 "iscsi_delete_portal_group", 00:04:57.545 "iscsi_create_portal_group", 00:04:57.545 "iscsi_get_portal_groups", 00:04:57.545 "iscsi_delete_target_node", 00:04:57.545 "iscsi_target_node_remove_pg_ig_maps", 00:04:57.545 "iscsi_target_node_add_pg_ig_maps", 00:04:57.545 "iscsi_create_target_node", 00:04:57.545 "iscsi_get_target_nodes", 00:04:57.545 "iscsi_delete_initiator_group", 00:04:57.545 "iscsi_initiator_group_remove_initiators", 00:04:57.545 "iscsi_initiator_group_add_initiators", 00:04:57.545 "iscsi_create_initiator_group", 00:04:57.545 "iscsi_get_initiator_groups", 00:04:57.545 "nvmf_set_crdt", 00:04:57.545 "nvmf_set_config", 00:04:57.545 "nvmf_set_max_subsystems", 00:04:57.545 "nvmf_stop_mdns_prr", 00:04:57.545 "nvmf_publish_mdns_prr", 00:04:57.545 "nvmf_subsystem_get_listeners", 00:04:57.545 "nvmf_subsystem_get_qpairs", 00:04:57.545 "nvmf_subsystem_get_controllers", 00:04:57.545 "nvmf_get_stats", 00:04:57.545 "nvmf_get_transports", 00:04:57.545 "nvmf_create_transport", 00:04:57.545 "nvmf_get_targets", 00:04:57.545 
"nvmf_delete_target", 00:04:57.545 "nvmf_create_target", 00:04:57.545 "nvmf_subsystem_allow_any_host", 00:04:57.545 "nvmf_subsystem_set_keys", 00:04:57.545 "nvmf_subsystem_remove_host", 00:04:57.545 "nvmf_subsystem_add_host", 00:04:57.545 "nvmf_ns_remove_host", 00:04:57.545 "nvmf_ns_add_host", 00:04:57.545 "nvmf_subsystem_remove_ns", 00:04:57.545 "nvmf_subsystem_set_ns_ana_group", 00:04:57.545 "nvmf_subsystem_add_ns", 00:04:57.545 "nvmf_subsystem_listener_set_ana_state", 00:04:57.545 "nvmf_discovery_get_referrals", 00:04:57.545 "nvmf_discovery_remove_referral", 00:04:57.545 "nvmf_discovery_add_referral", 00:04:57.545 "nvmf_subsystem_remove_listener", 00:04:57.545 "nvmf_subsystem_add_listener", 00:04:57.545 "nvmf_delete_subsystem", 00:04:57.545 "nvmf_create_subsystem", 00:04:57.545 "nvmf_get_subsystems", 00:04:57.545 "env_dpdk_get_mem_stats", 00:04:57.545 "nbd_get_disks", 00:04:57.545 "nbd_stop_disk", 00:04:57.545 "nbd_start_disk", 00:04:57.545 "ublk_recover_disk", 00:04:57.545 "ublk_get_disks", 00:04:57.545 "ublk_stop_disk", 00:04:57.546 "ublk_start_disk", 00:04:57.546 "ublk_destroy_target", 00:04:57.546 "ublk_create_target", 00:04:57.546 "virtio_blk_create_transport", 00:04:57.546 "virtio_blk_get_transports", 00:04:57.546 "vhost_controller_set_coalescing", 00:04:57.546 "vhost_get_controllers", 00:04:57.546 "vhost_delete_controller", 00:04:57.546 "vhost_create_blk_controller", 00:04:57.546 "vhost_scsi_controller_remove_target", 00:04:57.546 "vhost_scsi_controller_add_target", 00:04:57.546 "vhost_start_scsi_controller", 00:04:57.546 "vhost_create_scsi_controller", 00:04:57.546 "thread_set_cpumask", 00:04:57.546 "scheduler_set_options", 00:04:57.546 "framework_get_governor", 00:04:57.546 "framework_get_scheduler", 00:04:57.546 "framework_set_scheduler", 00:04:57.546 "framework_get_reactors", 00:04:57.546 "thread_get_io_channels", 00:04:57.546 "thread_get_pollers", 00:04:57.546 "thread_get_stats", 00:04:57.546 "framework_monitor_context_switch", 00:04:57.546 "spdk_kill_instance", 00:04:57.546 "log_enable_timestamps", 00:04:57.546 "log_get_flags", 00:04:57.546 "log_clear_flag", 00:04:57.546 "log_set_flag", 00:04:57.546 "log_get_level", 00:04:57.546 "log_set_level", 00:04:57.546 "log_get_print_level", 00:04:57.546 "log_set_print_level", 00:04:57.546 "framework_enable_cpumask_locks", 00:04:57.546 "framework_disable_cpumask_locks", 00:04:57.546 "framework_wait_init", 00:04:57.546 "framework_start_init", 00:04:57.546 "scsi_get_devices", 00:04:57.546 "bdev_get_histogram", 00:04:57.546 "bdev_enable_histogram", 00:04:57.546 "bdev_set_qos_limit", 00:04:57.546 "bdev_set_qd_sampling_period", 00:04:57.546 "bdev_get_bdevs", 00:04:57.546 "bdev_reset_iostat", 00:04:57.546 "bdev_get_iostat", 00:04:57.546 "bdev_examine", 00:04:57.546 "bdev_wait_for_examine", 00:04:57.546 "bdev_set_options", 00:04:57.546 "accel_get_stats", 00:04:57.546 "accel_set_options", 00:04:57.546 "accel_set_driver", 00:04:57.546 "accel_crypto_key_destroy", 00:04:57.546 "accel_crypto_keys_get", 00:04:57.546 "accel_crypto_key_create", 00:04:57.546 "accel_assign_opc", 00:04:57.546 "accel_get_module_info", 00:04:57.546 "accel_get_opc_assignments", 00:04:57.546 "vmd_rescan", 00:04:57.546 "vmd_remove_device", 00:04:57.546 "vmd_enable", 00:04:57.546 "sock_get_default_impl", 00:04:57.546 "sock_set_default_impl", 00:04:57.546 "sock_impl_set_options", 00:04:57.546 "sock_impl_get_options", 00:04:57.546 "iobuf_get_stats", 00:04:57.546 "iobuf_set_options", 00:04:57.546 "keyring_get_keys", 00:04:57.546 "framework_get_pci_devices", 00:04:57.546 
"framework_get_config", 00:04:57.546 "framework_get_subsystems", 00:04:57.546 "fsdev_set_opts", 00:04:57.546 "fsdev_get_opts", 00:04:57.546 "trace_get_info", 00:04:57.546 "trace_get_tpoint_group_mask", 00:04:57.546 "trace_disable_tpoint_group", 00:04:57.546 "trace_enable_tpoint_group", 00:04:57.546 "trace_clear_tpoint_mask", 00:04:57.546 "trace_set_tpoint_mask", 00:04:57.546 "notify_get_notifications", 00:04:57.546 "notify_get_types", 00:04:57.546 "spdk_get_version", 00:04:57.546 "rpc_get_methods" 00:04:57.546 ] 00:04:57.546 14:41:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:57.546 14:41:43 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.546 14:41:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.546 14:41:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:57.546 14:41:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57917 00:04:57.546 14:41:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57917 ']' 00:04:57.546 14:41:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57917 00:04:57.546 14:41:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:57.546 14:41:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.546 14:41:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57917 00:04:57.806 killing process with pid 57917 00:04:57.806 14:41:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.806 14:41:43 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.806 14:41:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57917' 00:04:57.807 14:41:43 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57917 00:04:57.807 14:41:43 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57917 00:04:59.192 ************************************ 00:04:59.192 END TEST spdkcli_tcp 00:04:59.192 ************************************ 00:04:59.192 00:04:59.192 real 0m2.738s 00:04:59.192 user 0m4.953s 00:04:59.192 sys 0m0.412s 00:04:59.192 14:41:44 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.192 14:41:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.192 14:41:44 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.192 14:41:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.192 14:41:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.192 14:41:44 -- common/autotest_common.sh@10 -- # set +x 00:04:59.192 ************************************ 00:04:59.192 START TEST dpdk_mem_utility 00:04:59.192 ************************************ 00:04:59.192 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.192 * Looking for test storage... 
00:04:59.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:59.192 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.193 14:41:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:59.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.193 --rc genhtml_branch_coverage=1 00:04:59.193 --rc genhtml_function_coverage=1 00:04:59.193 --rc genhtml_legend=1 00:04:59.193 --rc geninfo_all_blocks=1 00:04:59.193 --rc geninfo_unexecuted_blocks=1 00:04:59.193 00:04:59.193 ' 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.193 --rc genhtml_branch_coverage=1 00:04:59.193 --rc genhtml_function_coverage=1 00:04:59.193 --rc genhtml_legend=1 00:04:59.193 --rc geninfo_all_blocks=1 00:04:59.193 --rc geninfo_unexecuted_blocks=1 00:04:59.193 00:04:59.193 ' 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.193 --rc genhtml_branch_coverage=1 00:04:59.193 --rc genhtml_function_coverage=1 00:04:59.193 --rc genhtml_legend=1 00:04:59.193 --rc geninfo_all_blocks=1 00:04:59.193 --rc geninfo_unexecuted_blocks=1 00:04:59.193 00:04:59.193 ' 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.193 --rc genhtml_branch_coverage=1 00:04:59.193 --rc genhtml_function_coverage=1 00:04:59.193 --rc genhtml_legend=1 00:04:59.193 --rc geninfo_all_blocks=1 00:04:59.193 --rc geninfo_unexecuted_blocks=1 00:04:59.193 00:04:59.193 ' 00:04:59.193 14:41:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.193 14:41:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58027 00:04:59.193 14:41:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58027 00:04:59.193 14:41:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58027 ']' 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.193 14:41:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.454 [2024-11-17 14:41:44.791429] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
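[editor's note] The dpdk_mem_utility test starting here asks the running target for a DPDK memory dump via the env_dpdk_get_mem_stats RPC (the RPC reports the dump file, /tmp/spdk_mem_dump.txt) and then has scripts/dpdk_mem_info.py summarize it, producing the heap/mempool/memzone listing that follows. The same two steps in isolation, with paths copied from the trace:

  #!/usr/bin/env bash
  # Ask the running spdk_tgt to dump its DPDK memory state, then summarize it.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  mem_info=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  "$rpc" env_dpdk_get_mem_stats    # writes /tmp/spdk_mem_dump.txt and reports its path
  "$mem_info"                      # overall heap / mempool / memzone summary
  "$mem_info" -m 0                 # detailed view of heap 0, as run by the test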
00:04:59.454 [2024-11-17 14:41:44.791547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58027 ] 00:04:59.454 [2024-11-17 14:41:44.945594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.713 [2024-11-17 14:41:45.021038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.281 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.281 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:00.281 14:41:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:00.281 14:41:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:00.281 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.281 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.281 { 00:05:00.281 "filename": "/tmp/spdk_mem_dump.txt" 00:05:00.281 } 00:05:00.281 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.281 14:41:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:00.281 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:00.281 1 heaps totaling size 816.000000 MiB 00:05:00.281 size: 816.000000 MiB heap id: 0 00:05:00.281 end heaps---------- 00:05:00.281 9 mempools totaling size 595.772034 MiB 00:05:00.281 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:00.281 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:00.281 size: 92.545471 MiB name: bdev_io_58027 00:05:00.281 size: 50.003479 MiB name: msgpool_58027 00:05:00.281 size: 36.509338 MiB name: fsdev_io_58027 00:05:00.281 size: 21.763794 MiB name: PDU_Pool 00:05:00.281 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:00.281 size: 4.133484 MiB name: evtpool_58027 00:05:00.281 size: 0.026123 MiB name: Session_Pool 00:05:00.281 end mempools------- 00:05:00.281 6 memzones totaling size 4.142822 MiB 00:05:00.281 size: 1.000366 MiB name: RG_ring_0_58027 00:05:00.281 size: 1.000366 MiB name: RG_ring_1_58027 00:05:00.281 size: 1.000366 MiB name: RG_ring_4_58027 00:05:00.281 size: 1.000366 MiB name: RG_ring_5_58027 00:05:00.281 size: 0.125366 MiB name: RG_ring_2_58027 00:05:00.281 size: 0.015991 MiB name: RG_ring_3_58027 00:05:00.281 end memzones------- 00:05:00.281 14:41:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:00.281 heap id: 0 total size: 816.000000 MiB number of busy elements: 327 number of free elements: 18 00:05:00.281 list of free elements. 
size: 16.788452 MiB 00:05:00.281 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:00.281 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:00.281 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:00.281 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:00.281 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:00.281 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:00.281 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:00.281 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:00.281 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:00.281 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:00.281 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:00.281 element at address: 0x20001ac00000 with size: 0.559021 MiB 00:05:00.281 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:00.281 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:00.281 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:00.281 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:00.281 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:00.281 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:00.281 list of standard malloc elements. size: 199.290649 MiB 00:05:00.281 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:00.281 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:00.281 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:00.281 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:00.281 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:00.281 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:00.281 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:00.281 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:00.281 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:00.281 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:00.281 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:00.281 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:00.281 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:00.281 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:00.281 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:00.281 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:00.281 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:00.282 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:00.282 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac911c0 with size: 0.000244 MiB 
00:05:00.282 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:00.282 element at 
address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:00.282 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806cc80 
with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:00.283 element at address: 0x20002806fd80 with size: 0.000244 MiB 
00:05:00.283 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:00.283 list of memzone associated elements. size: 599.920898 MiB 00:05:00.283 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:00.283 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:00.283 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:00.283 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:00.283 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:00.283 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58027_0 00:05:00.283 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:00.283 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58027_0 00:05:00.283 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:00.283 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58027_0 00:05:00.283 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:00.283 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:00.283 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:00.283 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:00.283 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:00.283 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58027_0 00:05:00.283 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:00.283 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58027 00:05:00.283 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:00.283 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58027 00:05:00.283 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:00.283 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:00.283 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:00.283 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:00.283 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:00.283 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:00.283 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:00.283 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:00.283 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:00.283 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58027 00:05:00.283 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:00.283 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58027 00:05:00.283 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:00.283 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58027 00:05:00.283 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:00.283 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58027 00:05:00.283 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:00.283 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58027 00:05:00.283 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:00.283 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58027 00:05:00.283 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:00.283 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:00.283 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:00.283 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 
00:05:00.283 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:00.283 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:00.283 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:00.283 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58027 00:05:00.283 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:00.283 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58027 00:05:00.283 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:00.283 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:00.283 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:00.283 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:00.283 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:00.283 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58027 00:05:00.283 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:00.283 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:00.283 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:00.283 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58027 00:05:00.283 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:00.283 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58027 00:05:00.283 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:00.283 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58027 00:05:00.283 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:00.283 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:00.283 14:41:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:00.284 14:41:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58027 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58027 ']' 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58027 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58027 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58027' 00:05:00.284 killing process with pid 58027 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58027 00:05:00.284 14:41:45 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58027 00:05:01.661 00:05:01.661 real 0m2.363s 00:05:01.661 user 0m2.430s 00:05:01.661 sys 0m0.351s 00:05:01.661 14:41:46 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.661 14:41:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.661 ************************************ 00:05:01.661 END TEST dpdk_mem_utility 00:05:01.661 ************************************ 00:05:01.661 14:41:46 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:01.661 14:41:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.661 
14:41:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.661 14:41:46 -- common/autotest_common.sh@10 -- # set +x 00:05:01.661 ************************************ 00:05:01.661 START TEST event 00:05:01.661 ************************************ 00:05:01.661 14:41:47 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:01.661 * Looking for test storage... 00:05:01.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:01.661 14:41:47 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.661 14:41:47 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.661 14:41:47 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.661 14:41:47 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.661 14:41:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.661 14:41:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.661 14:41:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.661 14:41:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.661 14:41:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.661 14:41:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.661 14:41:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.661 14:41:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.661 14:41:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.661 14:41:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.661 14:41:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.661 14:41:47 event -- scripts/common.sh@344 -- # case "$op" in 00:05:01.661 14:41:47 event -- scripts/common.sh@345 -- # : 1 00:05:01.661 14:41:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.661 14:41:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.661 14:41:47 event -- scripts/common.sh@365 -- # decimal 1 00:05:01.661 14:41:47 event -- scripts/common.sh@353 -- # local d=1 00:05:01.661 14:41:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.661 14:41:47 event -- scripts/common.sh@355 -- # echo 1 00:05:01.661 14:41:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.661 14:41:47 event -- scripts/common.sh@366 -- # decimal 2 00:05:01.661 14:41:47 event -- scripts/common.sh@353 -- # local d=2 00:05:01.661 14:41:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.661 14:41:47 event -- scripts/common.sh@355 -- # echo 2 00:05:01.661 14:41:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.661 14:41:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.661 14:41:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.661 14:41:47 event -- scripts/common.sh@368 -- # return 0 00:05:01.662 14:41:47 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.662 14:41:47 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.662 --rc genhtml_branch_coverage=1 00:05:01.662 --rc genhtml_function_coverage=1 00:05:01.662 --rc genhtml_legend=1 00:05:01.662 --rc geninfo_all_blocks=1 00:05:01.662 --rc geninfo_unexecuted_blocks=1 00:05:01.662 00:05:01.662 ' 00:05:01.662 14:41:47 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.662 --rc genhtml_branch_coverage=1 00:05:01.662 --rc genhtml_function_coverage=1 00:05:01.662 --rc genhtml_legend=1 00:05:01.662 --rc geninfo_all_blocks=1 00:05:01.662 --rc geninfo_unexecuted_blocks=1 00:05:01.662 00:05:01.662 ' 00:05:01.662 14:41:47 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.662 --rc genhtml_branch_coverage=1 00:05:01.662 --rc genhtml_function_coverage=1 00:05:01.662 --rc genhtml_legend=1 00:05:01.662 --rc geninfo_all_blocks=1 00:05:01.662 --rc geninfo_unexecuted_blocks=1 00:05:01.662 00:05:01.662 ' 00:05:01.662 14:41:47 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.662 --rc genhtml_branch_coverage=1 00:05:01.662 --rc genhtml_function_coverage=1 00:05:01.662 --rc genhtml_legend=1 00:05:01.662 --rc geninfo_all_blocks=1 00:05:01.662 --rc geninfo_unexecuted_blocks=1 00:05:01.662 00:05:01.662 ' 00:05:01.662 14:41:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:01.662 14:41:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:01.662 14:41:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:01.662 14:41:47 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:01.662 14:41:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.662 14:41:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.662 ************************************ 00:05:01.662 START TEST event_perf 00:05:01.662 ************************************ 00:05:01.662 14:41:47 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:01.662 Running I/O for 1 seconds...[2024-11-17 
14:41:47.181550] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:01.662 [2024-11-17 14:41:47.181726] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58119 ] 00:05:01.920 [2024-11-17 14:41:47.337392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.920 [2024-11-17 14:41:47.420147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.920 [2024-11-17 14:41:47.420400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.920 [2024-11-17 14:41:47.420573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.920 Running I/O for 1 seconds...[2024-11-17 14:41:47.420596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.295 00:05:03.295 lcore 0: 207957 00:05:03.295 lcore 1: 207960 00:05:03.295 lcore 2: 207956 00:05:03.295 lcore 3: 207954 00:05:03.295 done. 00:05:03.295 00:05:03.295 real 0m1.393s 00:05:03.295 user 0m4.206s 00:05:03.295 sys 0m0.070s 00:05:03.295 14:41:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.295 14:41:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.295 ************************************ 00:05:03.295 END TEST event_perf 00:05:03.295 ************************************ 00:05:03.295 14:41:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:03.295 14:41:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:03.295 14:41:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.295 14:41:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.295 ************************************ 00:05:03.295 START TEST event_reactor 00:05:03.295 ************************************ 00:05:03.295 14:41:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:03.295 [2024-11-17 14:41:48.610389] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
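The event_perf binary above was run with -m 0xF -t 1, so it spins four reactors for one second and then prints one counter per lcore ("lcore N: <count>"). Summing those counters gives the aggregate event rate for the run; a small sketch, assuming the console output was saved to a file named event_perf.log (hypothetical name):

    # Add up the per-lcore counters printed at the end of the run.
    awk '/lcore [0-9]+:/ { total += $NF } END { print "total events in 1 s:", total }' event_perf.log

For the counts shown above this comes to just under 832,000 events across the four cores.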
00:05:03.295 [2024-11-17 14:41:48.610468] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58154 ] 00:05:03.295 [2024-11-17 14:41:48.753273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.295 [2024-11-17 14:41:48.829643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.674 test_start 00:05:04.674 oneshot 00:05:04.674 tick 100 00:05:04.674 tick 100 00:05:04.674 tick 250 00:05:04.674 tick 100 00:05:04.674 tick 100 00:05:04.674 tick 100 00:05:04.674 tick 250 00:05:04.674 tick 500 00:05:04.674 tick 100 00:05:04.674 tick 100 00:05:04.674 tick 250 00:05:04.674 tick 100 00:05:04.674 tick 100 00:05:04.674 test_end 00:05:04.674 ************************************ 00:05:04.674 END TEST event_reactor 00:05:04.674 ************************************ 00:05:04.674 00:05:04.674 real 0m1.359s 00:05:04.674 user 0m1.210s 00:05:04.674 sys 0m0.042s 00:05:04.674 14:41:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.674 14:41:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:04.674 14:41:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.674 14:41:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:04.674 14:41:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.674 14:41:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.674 ************************************ 00:05:04.674 START TEST event_reactor_perf 00:05:04.674 ************************************ 00:05:04.674 14:41:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.674 [2024-11-17 14:41:50.026498] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
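The "tick 100", "tick 250" and "tick 500" lines above appear to come from timed pollers with different periods, so tallying them is a quick sanity check that the shortest period fired most often. A sketch, assuming the output was captured to reactor.log (hypothetical name):

    # Tally the tick lines by period.
    grep -o 'tick [0-9]*' reactor.log | sort | uniq -c

In the run above, tick 100 appears nine times, tick 250 three times and tick 500 once, which matches that expectation.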
00:05:04.674 [2024-11-17 14:41:50.026632] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58190 ] 00:05:04.674 [2024-11-17 14:41:50.185727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.934 [2024-11-17 14:41:50.261909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.872 test_start 00:05:05.872 test_end 00:05:05.872 Performance: 405103 events per second 00:05:05.872 00:05:05.872 real 0m1.384s 00:05:05.872 user 0m1.210s 00:05:05.872 sys 0m0.067s 00:05:05.872 14:41:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.872 ************************************ 00:05:05.872 END TEST event_reactor_perf 00:05:05.872 ************************************ 00:05:05.872 14:41:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.131 14:41:51 event -- event/event.sh@49 -- # uname -s 00:05:06.131 14:41:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.131 14:41:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:06.131 14:41:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.131 14:41:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.131 14:41:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.131 ************************************ 00:05:06.131 START TEST event_scheduler 00:05:06.131 ************************************ 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:06.131 * Looking for test storage... 
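Every test in this section is launched through the harness's run_test helper: the xtrace lines show it checking its argument count ('[' N -le 1 ']'), printing the START TEST / END TEST banners, and timing the wrapped command (the real/user/sys lines). A simplified, approximate re-creation of that pattern (the real helper in autotest_common.sh also manages xtrace and return-code bookkeeping):

    run_test_sketch() {            # hypothetical stand-in for run_test
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        time "$@"; local rc=$?     # run the test command and keep its exit status
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }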
00:05:06.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.131 14:41:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.131 --rc genhtml_branch_coverage=1 00:05:06.131 --rc genhtml_function_coverage=1 00:05:06.131 --rc genhtml_legend=1 00:05:06.131 --rc geninfo_all_blocks=1 00:05:06.131 --rc geninfo_unexecuted_blocks=1 00:05:06.131 00:05:06.131 ' 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.131 --rc genhtml_branch_coverage=1 00:05:06.131 --rc genhtml_function_coverage=1 00:05:06.131 --rc genhtml_legend=1 00:05:06.131 --rc geninfo_all_blocks=1 00:05:06.131 --rc geninfo_unexecuted_blocks=1 00:05:06.131 00:05:06.131 ' 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.131 --rc genhtml_branch_coverage=1 00:05:06.131 --rc genhtml_function_coverage=1 00:05:06.131 --rc genhtml_legend=1 00:05:06.131 --rc geninfo_all_blocks=1 00:05:06.131 --rc geninfo_unexecuted_blocks=1 00:05:06.131 00:05:06.131 ' 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.131 --rc genhtml_branch_coverage=1 00:05:06.131 --rc genhtml_function_coverage=1 00:05:06.131 --rc genhtml_legend=1 00:05:06.131 --rc geninfo_all_blocks=1 00:05:06.131 --rc geninfo_unexecuted_blocks=1 00:05:06.131 00:05:06.131 ' 00:05:06.131 14:41:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.131 14:41:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58261 00:05:06.131 14:41:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.131 14:41:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58261 00:05:06.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
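waitforlisten (invoked here as "waitforlisten 58261") blocks until the freshly started SPDK application is answering RPCs on the given UNIX socket, giving up after a bounded number of retries (max_retries=100 in the trace). A minimal polling loop in the same spirit, assuming scripts/rpc.py and the standard rpc_get_methods method are available:

    waitforlisten_sketch() {       # hypothetical stand-in for waitforlisten
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                             # socket is up and answering
            fi
            sleep 0.1
        done
        return 1
    }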
00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58261 ']' 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.131 14:41:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.131 14:41:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:06.131 [2024-11-17 14:41:51.625821] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:06.131 [2024-11-17 14:41:51.625954] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58261 ] 00:05:06.391 [2024-11-17 14:41:51.786501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.391 [2024-11-17 14:41:51.884031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.391 [2024-11-17 14:41:51.884364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.391 [2024-11-17 14:41:51.884553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.391 [2024-11-17 14:41:51.884574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.023 14:41:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.023 14:41:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:07.023 14:41:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:07.023 14:41:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.023 14:41:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.023 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.023 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.023 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.023 POWER: Cannot set governor of lcore 0 to performance 00:05:07.023 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.023 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.023 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.023 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.023 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:07.023 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:07.023 POWER: Unable to set Power Management Environment for lcore 0 00:05:07.023 [2024-11-17 14:41:52.461744] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:07.023 [2024-11-17 14:41:52.461762] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:07.023 [2024-11-17 14:41:52.461771] scheduler_dynamic.c: 280:init: 
*NOTICE*: Unable to initialize dpdk governor 00:05:07.023 [2024-11-17 14:41:52.461787] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:07.023 [2024-11-17 14:41:52.461794] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:07.023 [2024-11-17 14:41:52.461803] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:07.023 14:41:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.023 14:41:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:07.023 14:41:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.023 14:41:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.288 [2024-11-17 14:41:52.679446] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:07.288 14:41:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.288 14:41:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.288 14:41:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.288 14:41:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.288 14:41:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.288 ************************************ 00:05:07.288 START TEST scheduler_create_thread 00:05:07.288 ************************************ 00:05:07.288 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 2 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 3 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 4 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
active_pinned -m 0x8 -a 100 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 5 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 6 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 7 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 8 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 9 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 10 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.289 14:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.858 14:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.858 00:05:07.858 real 0m0.594s 00:05:07.858 user 0m0.012s 00:05:07.858 sys 0m0.007s 00:05:07.858 14:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.858 14:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.858 ************************************ 00:05:07.858 END TEST scheduler_create_thread 00:05:07.858 ************************************ 00:05:07.858 14:41:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.858 14:41:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58261 00:05:07.858 14:41:53 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58261 ']' 00:05:07.858 14:41:53 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58261 00:05:07.858 14:41:53 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:07.858 14:41:53 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.858 14:41:53 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58261 00:05:07.858 14:41:53 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:07.858 14:41:53 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:07.858 killing process with pid 58261 00:05:07.858 14:41:53 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58261' 00:05:07.858 14:41:53 event.event_scheduler -- 
common/autotest_common.sh@973 -- # kill 58261 00:05:07.858 14:41:53 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58261 00:05:08.430 [2024-11-17 14:41:53.764161] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:09.001 00:05:09.001 real 0m2.893s 00:05:09.001 user 0m5.483s 00:05:09.001 sys 0m0.334s 00:05:09.001 14:41:54 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.001 14:41:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.001 ************************************ 00:05:09.001 END TEST event_scheduler 00:05:09.001 ************************************ 00:05:09.001 14:41:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.001 14:41:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.001 14:41:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.001 14:41:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.001 14:41:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.001 ************************************ 00:05:09.001 START TEST app_repeat 00:05:09.001 ************************************ 00:05:09.001 14:41:54 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58345 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.001 Process app_repeat pid: 58345 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58345' 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.001 spdk_app_start Round 0 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58345 /var/tmp/spdk-nbd.sock 00:05:09.001 14:41:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58345 ']' 00:05:09.001 14:41:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.001 14:41:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.001 14:41:54 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.001 14:41:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
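app_repeat (started above with -m 0x3 -t 4) exercises the malloc-bdev/nbd path: for each "spdk_app_start Round", the test appears to create two malloc bdevs with arguments 64 and 4096, export them through the kernel nbd driver, and verify data through /dev/nbd0 and /dev/nbd1 before tearing everything down. A condensed sketch of the setup half, using the same RPCs that are traced below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # Create the two backing bdevs (arguments 64 and 4096, as in the trace).
    $rpc -s "$sock" bdev_malloc_create 64 4096      # prints the bdev name, e.g. Malloc0
    $rpc -s "$sock" bdev_malloc_create 64 4096      # Malloc1
    # Export each bdev as a kernel block device via nbd.
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1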
00:05:09.001 14:41:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.001 14:41:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.001 [2024-11-17 14:41:54.401485] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:09.001 [2024-11-17 14:41:54.401592] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58345 ] 00:05:09.262 [2024-11-17 14:41:54.562334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.262 [2024-11-17 14:41:54.657104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.262 [2024-11-17 14:41:54.657199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.832 14:41:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.832 14:41:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.832 14:41:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.093 Malloc0 00:05:10.093 14:41:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.353 Malloc1 00:05:10.353 14:41:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.353 14:41:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.613 /dev/nbd0 00:05:10.613 14:41:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.613 14:41:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.613 14:41:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:10.613 14:41:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.613 14:41:55 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.613 14:41:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.613 14:41:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:10.614 14:41:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.614 14:41:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.614 14:41:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.614 14:41:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.614 1+0 records in 00:05:10.614 1+0 records out 00:05:10.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189423 s, 21.6 MB/s 00:05:10.614 14:41:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.614 14:41:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.614 14:41:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.614 14:41:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.614 14:41:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.614 14:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.614 14:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.614 14:41:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.873 /dev/nbd1 00:05:10.873 14:41:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.873 14:41:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.873 1+0 records in 00:05:10.873 1+0 records out 00:05:10.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141724 s, 28.9 MB/s 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.873 14:41:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.873 14:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.873 
14:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.873 14:41:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.873 14:41:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.873 14:41:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.873 14:41:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.873 { 00:05:10.873 "nbd_device": "/dev/nbd0", 00:05:10.873 "bdev_name": "Malloc0" 00:05:10.873 }, 00:05:10.873 { 00:05:10.873 "nbd_device": "/dev/nbd1", 00:05:10.873 "bdev_name": "Malloc1" 00:05:10.873 } 00:05:10.873 ]' 00:05:10.873 14:41:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.873 { 00:05:10.873 "nbd_device": "/dev/nbd0", 00:05:10.873 "bdev_name": "Malloc0" 00:05:10.873 }, 00:05:10.873 { 00:05:10.873 "nbd_device": "/dev/nbd1", 00:05:10.873 "bdev_name": "Malloc1" 00:05:10.873 } 00:05:10.873 ]' 00:05:10.873 14:41:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.131 /dev/nbd1' 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.131 /dev/nbd1' 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.131 256+0 records in 00:05:11.131 256+0 records out 00:05:11.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00736553 s, 142 MB/s 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.131 256+0 records in 00:05:11.131 256+0 records out 00:05:11.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200273 s, 52.4 MB/s 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.131 256+0 records in 00:05:11.131 256+0 records out 00:05:11.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228876 s, 45.8 MB/s 00:05:11.131 14:41:56 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.131 14:41:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.390 14:41:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.649 14:41:56 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.649 14:41:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.649 14:41:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.649 14:41:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.649 14:41:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.649 14:41:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.649 14:41:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.907 14:41:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.841 [2024-11-17 14:41:58.017124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.841 [2024-11-17 14:41:58.086371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.841 [2024-11-17 14:41:58.086372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.841 [2024-11-17 14:41:58.189183] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.841 [2024-11-17 14:41:58.189235] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.379 14:42:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.379 spdk_app_start Round 1 00:05:15.379 14:42:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.379 14:42:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58345 /var/tmp/spdk-nbd.sock 00:05:15.379 14:42:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58345 ']' 00:05:15.379 14:42:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.379 14:42:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.379 14:42:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
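Round 0 above created two malloc bdevs over RPC, exported each one as a kernel NBD device, and confirmed the kernel saw them before any I/O. A condensed sketch of that setup, using a small rpc() wrapper in place of the repeated rpc.py invocations (paths and sizes are copied from the log; the wrapper itself is an assumption):

    # Hypothetical wrapper around the rpc.py calls seen in the trace.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create 64 4096        # 64 MB bdev, 4096-byte blocks -> Malloc0
    rpc bdev_malloc_create 64 4096        # second bdev -> Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0  # expose each bdev as an NBD block device
    rpc nbd_start_disk Malloc1 /dev/nbd1
    grep -w nbd0 /proc/partitions         # readiness check, as in waitfornbd()
    grep -w nbd1 /proc/partitions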
00:05:15.379 14:42:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.379 14:42:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.379 14:42:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.379 14:42:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:15.379 14:42:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.379 Malloc0 00:05:15.379 14:42:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.637 Malloc1 00:05:15.637 14:42:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.637 14:42:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.895 /dev/nbd0 00:05:15.895 14:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.895 14:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.895 1+0 records in 00:05:15.895 1+0 records out 
00:05:15.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223153 s, 18.4 MB/s 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.895 14:42:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.895 14:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.895 14:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.895 14:42:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.154 /dev/nbd1 00:05:16.154 14:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.154 14:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.154 1+0 records in 00:05:16.154 1+0 records out 00:05:16.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164657 s, 24.9 MB/s 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.154 14:42:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.154 14:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.154 14:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.154 14:42:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.154 14:42:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.154 14:42:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.420 { 00:05:16.420 "nbd_device": "/dev/nbd0", 00:05:16.420 "bdev_name": "Malloc0" 00:05:16.420 }, 00:05:16.420 { 00:05:16.420 "nbd_device": "/dev/nbd1", 00:05:16.420 "bdev_name": "Malloc1" 00:05:16.420 } 
00:05:16.420 ]' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.420 { 00:05:16.420 "nbd_device": "/dev/nbd0", 00:05:16.420 "bdev_name": "Malloc0" 00:05:16.420 }, 00:05:16.420 { 00:05:16.420 "nbd_device": "/dev/nbd1", 00:05:16.420 "bdev_name": "Malloc1" 00:05:16.420 } 00:05:16.420 ]' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.420 /dev/nbd1' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.420 /dev/nbd1' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.420 256+0 records in 00:05:16.420 256+0 records out 00:05:16.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0083391 s, 126 MB/s 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.420 256+0 records in 00:05:16.420 256+0 records out 00:05:16.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018556 s, 56.5 MB/s 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.420 256+0 records in 00:05:16.420 256+0 records out 00:05:16.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143724 s, 73.0 MB/s 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.420 14:42:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.692 14:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.950 14:42:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.950 14:42:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.208 14:42:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.774 [2024-11-17 14:42:03.296038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.032 [2024-11-17 14:42:03.370536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.032 [2024-11-17 14:42:03.370548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.032 [2024-11-17 14:42:03.473093] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.032 [2024-11-17 14:42:03.473133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.560 spdk_app_start Round 2 00:05:20.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.560 14:42:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.560 14:42:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:20.560 14:42:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58345 /var/tmp/spdk-nbd.sock 00:05:20.560 14:42:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58345 ']' 00:05:20.560 14:42:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.560 14:42:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.560 14:42:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
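Each round then runs the same write/verify pass that the dd and cmp lines above trace: fill a 1 MiB temp file with random data, copy it onto both NBD devices with O_DIRECT, and compare each device back against the file. A minimal sketch of that loop with paths taken from the log (a simplified stand-in for nbd_dd_data_verify(), not the helper itself):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256           # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct # write it to each device
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                            # non-zero exit on any mismatch
    done
    rm "$tmp_file"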
00:05:20.560 14:42:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.560 14:42:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.560 14:42:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.560 14:42:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.560 14:42:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.820 Malloc0 00:05:20.820 14:42:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.078 Malloc1 00:05:21.078 14:42:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.079 14:42:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.336 /dev/nbd0 00:05:21.336 14:42:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.336 14:42:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.336 1+0 records in 00:05:21.336 1+0 records out 
00:05:21.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000121135 s, 33.8 MB/s 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.336 14:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.336 14:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.336 14:42:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.336 /dev/nbd1 00:05:21.336 14:42:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.336 14:42:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.336 14:42:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.336 1+0 records in 00:05:21.336 1+0 records out 00:05:21.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260227 s, 15.7 MB/s 00:05:21.594 14:42:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.594 14:42:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.594 14:42:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.594 14:42:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.594 14:42:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.594 14:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.594 14:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.594 14:42:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.594 14:42:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.594 14:42:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:21.594 { 00:05:21.594 "nbd_device": "/dev/nbd0", 00:05:21.594 "bdev_name": "Malloc0" 00:05:21.594 }, 00:05:21.594 { 00:05:21.594 "nbd_device": "/dev/nbd1", 00:05:21.594 "bdev_name": "Malloc1" 00:05:21.594 } 
00:05:21.594 ]' 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.594 { 00:05:21.594 "nbd_device": "/dev/nbd0", 00:05:21.594 "bdev_name": "Malloc0" 00:05:21.594 }, 00:05:21.594 { 00:05:21.594 "nbd_device": "/dev/nbd1", 00:05:21.594 "bdev_name": "Malloc1" 00:05:21.594 } 00:05:21.594 ]' 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.594 /dev/nbd1' 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.594 /dev/nbd1' 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.594 14:42:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.852 256+0 records in 00:05:21.852 256+0 records out 00:05:21.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107237 s, 97.8 MB/s 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.852 256+0 records in 00:05:21.852 256+0 records out 00:05:21.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125967 s, 83.2 MB/s 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.852 256+0 records in 00:05:21.852 256+0 records out 00:05:21.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167642 s, 62.5 MB/s 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.852 14:42:07 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.852 14:42:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.109 14:42:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.366 14:42:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.366 14:42:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.624 14:42:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.191 [2024-11-17 14:42:08.676571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.450 [2024-11-17 14:42:08.748902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.450 [2024-11-17 14:42:08.748909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.450 [2024-11-17 14:42:08.851203] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.450 [2024-11-17 14:42:08.851250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.994 14:42:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58345 /var/tmp/spdk-nbd.sock 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58345 ']' 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
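Before and after each round the trace counts the exported devices by listing them over RPC, extracting the device nodes with jq, and grepping for /dev/nbd; the count is expected to be 2 while Malloc0/Malloc1 are exported and 0 once nbd_stop_disk has run. A small sketch of that check (the function name is hypothetical; the pipeline mirrors the traced commands):

    nbd_count() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true   # grep -c prints 0 but exits 1 on no match, hence || true
    }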
00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:25.994 14:42:11 event.app_repeat -- event/event.sh@39 -- # killprocess 58345 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58345 ']' 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58345 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58345 00:05:25.994 killing process with pid 58345 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58345' 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58345 00:05:25.994 14:42:11 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58345 00:05:26.566 spdk_app_start is called in Round 0. 00:05:26.566 Shutdown signal received, stop current app iteration 00:05:26.566 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:26.566 spdk_app_start is called in Round 1. 00:05:26.566 Shutdown signal received, stop current app iteration 00:05:26.566 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:26.566 spdk_app_start is called in Round 2. 00:05:26.566 Shutdown signal received, stop current app iteration 00:05:26.566 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:26.566 spdk_app_start is called in Round 3. 00:05:26.566 Shutdown signal received, stop current app iteration 00:05:26.566 14:42:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:26.566 14:42:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:26.566 00:05:26.566 real 0m17.517s 00:05:26.566 user 0m38.378s 00:05:26.566 sys 0m2.015s 00:05:26.566 14:42:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.566 14:42:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 ************************************ 00:05:26.566 END TEST app_repeat 00:05:26.566 ************************************ 00:05:26.566 14:42:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:26.566 14:42:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:26.566 14:42:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.566 14:42:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.566 14:42:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 ************************************ 00:05:26.566 START TEST cpu_locks 00:05:26.566 ************************************ 00:05:26.566 14:42:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:26.566 * Looking for test storage... 
00:05:26.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:26.566 14:42:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.566 14:42:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.566 14:42:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.566 14:42:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.566 14:42:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.567 14:42:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:26.567 14:42:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.567 14:42:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.567 --rc genhtml_branch_coverage=1 00:05:26.567 --rc genhtml_function_coverage=1 00:05:26.567 --rc genhtml_legend=1 00:05:26.567 --rc geninfo_all_blocks=1 00:05:26.567 --rc geninfo_unexecuted_blocks=1 00:05:26.567 00:05:26.567 ' 00:05:26.567 14:42:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.567 --rc genhtml_branch_coverage=1 00:05:26.567 --rc genhtml_function_coverage=1 
00:05:26.567 --rc genhtml_legend=1 00:05:26.567 --rc geninfo_all_blocks=1 00:05:26.567 --rc geninfo_unexecuted_blocks=1 00:05:26.567 00:05:26.567 ' 00:05:26.567 14:42:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.567 --rc genhtml_branch_coverage=1 00:05:26.567 --rc genhtml_function_coverage=1 00:05:26.567 --rc genhtml_legend=1 00:05:26.567 --rc geninfo_all_blocks=1 00:05:26.567 --rc geninfo_unexecuted_blocks=1 00:05:26.567 00:05:26.567 ' 00:05:26.567 14:42:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.567 --rc genhtml_branch_coverage=1 00:05:26.567 --rc genhtml_function_coverage=1 00:05:26.567 --rc genhtml_legend=1 00:05:26.567 --rc geninfo_all_blocks=1 00:05:26.567 --rc geninfo_unexecuted_blocks=1 00:05:26.567 00:05:26.567 ' 00:05:26.567 14:42:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:26.567 14:42:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:26.567 14:42:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:26.567 14:42:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:26.567 14:42:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.567 14:42:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.567 14:42:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.567 ************************************ 00:05:26.567 START TEST default_locks 00:05:26.567 ************************************ 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58770 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58770 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58770 ']' 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.567 14:42:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.827 [2024-11-17 14:42:12.126353] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:26.827 [2024-11-17 14:42:12.126470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58770 ] 00:05:26.827 [2024-11-17 14:42:12.281416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.827 [2024-11-17 14:42:12.357048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.762 14:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.762 14:42:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:27.762 14:42:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58770 00:05:27.762 14:42:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58770 00:05:27.762 14:42:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58770 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58770 ']' 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58770 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58770 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.762 killing process with pid 58770 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58770' 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58770 00:05:27.762 14:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58770 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58770 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58770 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58770 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58770 ']' 00:05:29.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.137 ERROR: process (pid: 58770) is no longer running 00:05:29.137 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58770) - No such process 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.137 00:05:29.137 real 0m2.286s 00:05:29.137 user 0m2.305s 00:05:29.137 sys 0m0.420s 00:05:29.137 ************************************ 00:05:29.137 END TEST default_locks 00:05:29.137 ************************************ 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.137 14:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.137 14:42:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:29.137 14:42:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.137 14:42:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.137 14:42:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.137 ************************************ 00:05:29.137 START TEST default_locks_via_rpc 00:05:29.137 ************************************ 00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:29.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
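A minimal sketch of the lock-existence check that the default_locks trace above exercises repeatedly; the helper name and wrapper are assumptions inferred from the traced lslocks/grep calls rather than the literal cpu_locks.sh source:

    # Succeeds only while the target process still holds an spdk_cpu_lock file lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # As in the trace: the check passes while spdk_tgt (pid 58770 above) is alive and
    # holding core 0, and is expected to fail once killprocess has torn it down.
    locks_exist "$spdk_tgt_pid" && echo "core lock held"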
00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58823 00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58823 00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58823 ']' 00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.137 14:42:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.137 [2024-11-17 14:42:14.449020] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:29.137 [2024-11-17 14:42:14.449221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58823 ] 00:05:29.137 [2024-11-17 14:42:14.600576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.398 [2024-11-17 14:42:14.677892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58823 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58823 00:05:29.968 
14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58823 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58823 ']' 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58823 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58823 00:05:29.968 killing process with pid 58823 00:05:29.968 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.969 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.969 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58823' 00:05:29.969 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58823 00:05:29.969 14:42:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58823 00:05:31.353 00:05:31.353 real 0m2.197s 00:05:31.353 user 0m2.180s 00:05:31.353 sys 0m0.394s 00:05:31.353 14:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.353 14:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.353 ************************************ 00:05:31.353 END TEST default_locks_via_rpc 00:05:31.353 ************************************ 00:05:31.353 14:42:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:31.353 14:42:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.353 14:42:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.353 14:42:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.353 ************************************ 00:05:31.353 START TEST non_locking_app_on_locked_coremask 00:05:31.353 ************************************ 00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:31.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
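A compressed view of what default_locks_via_rpc verifies; the RPC method names are the ones shown in the trace, while the inline comments are an interpretation of the observed no_locks/locks_exist results rather than text from the script:

    # Target was started normally (-m 0x1), so it claims its core lock at boot.
    rpc_cmd framework_disable_cpumask_locks              # afterwards no_locks sees zero lock files
    rpc_cmd framework_enable_cpumask_locks               # the core lock is re-claimed at runtime
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # succeeds again, as traced for pid 58823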
00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58875 00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58875 /var/tmp/spdk.sock 00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58875 ']' 00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.353 14:42:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.353 [2024-11-17 14:42:16.712057] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:31.353 [2024-11-17 14:42:16.712173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58875 ] 00:05:31.353 [2024-11-17 14:42:16.869030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.656 [2024-11-17 14:42:16.952014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58891 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58891 /var/tmp/spdk2.sock 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58891 ']' 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.228 14:42:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.228 [2024-11-17 14:42:17.608962] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:32.228 [2024-11-17 14:42:17.609458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:05:32.228 [2024-11-17 14:42:17.766520] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:32.228 [2024-11-17 14:42:17.766558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.490 [2024-11-17 14:42:17.925528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.434 14:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.434 14:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:33.434 14:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58875 00:05:33.434 14:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58875 00:05:33.434 14:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58875 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58875 ']' 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58875 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58875 00:05:33.696 killing process with pid 58875 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58875' 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58875 00:05:33.696 14:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58875 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58891 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58891 ']' 00:05:36.285 14:42:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58891 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58891 00:05:36.285 killing process with pid 58891 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58891' 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58891 00:05:36.285 14:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58891 00:05:37.240 00:05:37.240 real 0m6.109s 00:05:37.240 user 0m6.372s 00:05:37.240 sys 0m0.819s 00:05:37.240 ************************************ 00:05:37.240 END TEST non_locking_app_on_locked_coremask 00:05:37.240 ************************************ 00:05:37.240 14:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.240 14:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.240 14:42:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:37.240 14:42:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.240 14:42:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.240 14:42:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.501 ************************************ 00:05:37.501 START TEST locking_app_on_unlocked_coremask 00:05:37.501 ************************************ 00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:37.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58982 00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58982 /var/tmp/spdk.sock 00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58982 ']' 00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
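The scenario that just finished (non_locking_app_on_locked_coremask), reduced to the two launches that matter; the binary path, core mask and flags are taken from the trace, and the comments are an interpretation:

    # First target claims core 0 and creates its /var/tmp/spdk_cpu_lock_* file.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &

    # Second target asks for the same core but opts out of lock claiming, so it
    # comes up cleanly on its own RPC socket instead of exiting with a lock error.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &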
00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.501 14:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:37.501 [2024-11-17 14:42:22.858676] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:37.501 [2024-11-17 14:42:22.858793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58982 ] 00:05:37.501 [2024-11-17 14:42:23.011528] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:37.501 [2024-11-17 14:42:23.011569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.761 [2024-11-17 14:42:23.091909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58998 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58998 /var/tmp/spdk2.sock 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58998 ']' 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.332 14:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.332 [2024-11-17 14:42:23.760680] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:38.332 [2024-11-17 14:42:23.761003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58998 ] 00:05:38.593 [2024-11-17 14:42:23.923645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.593 [2024-11-17 14:42:24.083933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.532 14:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.532 14:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:39.532 14:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58998 00:05:39.532 14:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.532 14:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58998 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58982 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58982 ']' 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58982 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58982 00:05:39.791 killing process with pid 58982 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58982' 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58982 00:05:39.791 14:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58982 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58998 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58998 ']' 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58998 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58998 00:05:42.332 killing process with pid 58998 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.332 14:42:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58998' 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58998 00:05:42.332 14:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58998 00:05:43.708 ************************************ 00:05:43.708 END TEST locking_app_on_unlocked_coremask 00:05:43.708 00:05:43.708 real 0m6.075s 00:05:43.708 user 0m6.392s 00:05:43.708 sys 0m0.765s 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.708 ************************************ 00:05:43.708 14:42:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:43.708 14:42:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.708 14:42:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.708 14:42:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.708 ************************************ 00:05:43.708 START TEST locking_app_on_locked_coremask 00:05:43.708 ************************************ 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59089 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59089 /var/tmp/spdk.sock 00:05:43.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59089 ']' 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.708 14:42:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.708 [2024-11-17 14:42:28.966833] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:43.708 [2024-11-17 14:42:28.966998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59089 ] 00:05:43.708 [2024-11-17 14:42:29.112766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.708 [2024-11-17 14:42:29.192719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59105 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59105 /var/tmp/spdk2.sock 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59105 /var/tmp/spdk2.sock 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59105 /var/tmp/spdk2.sock 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59105 ']' 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.274 14:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.532 [2024-11-17 14:42:29.881841] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:44.532 [2024-11-17 14:42:29.882145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59105 ] 00:05:44.532 [2024-11-17 14:42:30.053036] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59089 has claimed it. 00:05:44.532 [2024-11-17 14:42:30.053097] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.098 ERROR: process (pid: 59105) is no longer running 00:05:45.098 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59105) - No such process 00:05:45.098 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.098 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:45.098 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:45.098 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.098 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.098 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.098 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59089 00:05:45.098 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59089 00:05:45.098 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59089 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59089 ']' 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59089 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59089 00:05:45.356 killing process with pid 59089 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59089' 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59089 00:05:45.356 14:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59089 00:05:46.381 00:05:46.381 real 0m2.983s 00:05:46.381 user 0m3.235s 00:05:46.381 sys 0m0.496s 00:05:46.381 14:42:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.381 ************************************ 00:05:46.381 END 
TEST locking_app_on_locked_coremask 00:05:46.381 14:42:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.381 ************************************ 00:05:46.639 14:42:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:46.639 14:42:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.639 14:42:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.639 14:42:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.639 ************************************ 00:05:46.639 START TEST locking_overlapped_coremask 00:05:46.639 ************************************ 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59158 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59158 /var/tmp/spdk.sock 00:05:46.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59158 ']' 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:46.639 14:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.640 [2024-11-17 14:42:32.023140] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:46.640 [2024-11-17 14:42:32.023253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59158 ] 00:05:46.640 [2024-11-17 14:42:32.179312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.898 [2024-11-17 14:42:32.264503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.898 [2024-11-17 14:42:32.265072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.898 [2024-11-17 14:42:32.265093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59176 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59176 /var/tmp/spdk2.sock 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59176 /var/tmp/spdk2.sock 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59176 /var/tmp/spdk2.sock 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59176 ']' 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.464 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.465 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.465 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.465 14:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.465 [2024-11-17 14:42:32.921202] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:47.465 [2024-11-17 14:42:32.921474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59176 ] 00:05:47.722 [2024-11-17 14:42:33.094982] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59158 has claimed it. 00:05:47.722 [2024-11-17 14:42:33.098953] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.288 ERROR: process (pid: 59176) is no longer running 00:05:48.288 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59176) - No such process 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59158 00:05:48.288 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59158 ']' 00:05:48.289 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59158 00:05:48.289 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:48.289 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.289 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59158 00:05:48.289 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.289 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.289 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59158' 00:05:48.289 killing process with pid 59158 00:05:48.289 14:42:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59158 00:05:48.289 14:42:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59158 00:05:49.223 00:05:49.223 real 0m2.816s 00:05:49.223 user 0m7.714s 00:05:49.223 sys 0m0.401s 00:05:49.223 14:42:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.223 14:42:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.223 ************************************ 00:05:49.223 END TEST locking_overlapped_coremask 00:05:49.223 ************************************ 00:05:49.481 14:42:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:49.481 14:42:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.481 14:42:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.481 14:42:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.481 ************************************ 00:05:49.481 START TEST locking_overlapped_coremask_via_rpc 00:05:49.481 ************************************ 00:05:49.481 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:49.481 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59229 00:05:49.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.482 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59229 /var/tmp/spdk.sock 00:05:49.482 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59229 ']' 00:05:49.482 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.482 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:49.482 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.482 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.482 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.482 14:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.482 [2024-11-17 14:42:34.890427] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:49.482 [2024-11-17 14:42:34.890695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59229 ] 00:05:49.739 [2024-11-17 14:42:35.046470] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
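Restating the check_remaining_locks comparison from the locking_overlapped_coremask run above, since the escaped glob in the trace is hard to read; the variable names match the trace, while the exact quoting of the comparison is an assumption:

    # Mask 0x7 claims cores 0-2, so exactly these three lock files should remain:
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]   # passes when no stray lock files are left behind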
00:05:49.739 [2024-11-17 14:42:35.046602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.739 [2024-11-17 14:42:35.128489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.739 [2024-11-17 14:42:35.128767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.739 [2024-11-17 14:42:35.128802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59247 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59247 /var/tmp/spdk2.sock 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59247 ']' 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.304 14:42:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.304 [2024-11-17 14:42:35.759254] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:50.304 [2024-11-17 14:42:35.759534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59247 ] 00:05:50.563 [2024-11-17 14:42:35.923848] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.563 [2024-11-17 14:42:35.923888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.563 [2024-11-17 14:42:36.091766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.563 [2024-11-17 14:42:36.094972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.563 [2024-11-17 14:42:36.094977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.498 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.758 [2024-11-17 14:42:37.043029] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59229 has claimed it. 00:05:51.758 request: 00:05:51.758 { 00:05:51.758 "method": "framework_enable_cpumask_locks", 00:05:51.758 "req_id": 1 00:05:51.758 } 00:05:51.758 Got JSON-RPC error response 00:05:51.758 response: 00:05:51.758 { 00:05:51.758 "code": -32603, 00:05:51.758 "message": "Failed to claim CPU core: 2" 00:05:51.758 } 00:05:51.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
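Note: the claim failure above is the expected outcome of the overlapping coremasks — 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so once the first target has claimed its cores, core 2 is no longer claimable by the second. A minimal manual reproduction of the same conflict, assuming the repo-root paths used in this run (the test itself waits with waitforlisten rather than the crude sleep below):

  printf 'shared core mask: 0x%x\n' $((0x7 & 0x1c))        # 0x4 -> core 2 is the shared core
  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  sleep 1                                                  # assumption: both targets are listening by now
  scripts/rpc.py framework_enable_cpumask_locks            # first target claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # the second call is expected to fail with -32603 "Failed to claim CPU core: 2"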
00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59229 /var/tmp/spdk.sock 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59229 ']' 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59247 /var/tmp/spdk2.sock 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59247 ']' 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.758 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.018 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:52.018 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:52.018 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:52.018 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:52.018 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:52.018 00:05:52.018 real 0m2.661s 00:05:52.018 user 0m1.017s 00:05:52.018 sys 0m0.141s 00:05:52.018 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.018 14:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 ************************************ 00:05:52.018 END TEST locking_overlapped_coremask_via_rpc 00:05:52.018 ************************************ 00:05:52.018 14:42:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:52.018 14:42:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59229 ]] 00:05:52.018 14:42:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59229 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59229 ']' 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59229 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59229 00:05:52.018 killing process with pid 59229 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59229' 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59229 00:05:52.018 14:42:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59229 00:05:53.393 14:42:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59247 ]] 00:05:53.393 14:42:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59247 00:05:53.394 14:42:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59247 ']' 00:05:53.394 14:42:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59247 00:05:53.394 14:42:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:53.394 14:42:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.394 
14:42:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59247 00:05:53.394 killing process with pid 59247 00:05:53.394 14:42:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:53.394 14:42:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:53.394 14:42:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59247' 00:05:53.394 14:42:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59247 00:05:53.394 14:42:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59247 00:05:54.768 14:42:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.768 Process with pid 59229 is not found 00:05:54.768 Process with pid 59247 is not found 00:05:54.768 14:42:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:54.768 14:42:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59229 ]] 00:05:54.768 14:42:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59229 00:05:54.768 14:42:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59229 ']' 00:05:54.768 14:42:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59229 00:05:54.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59229) - No such process 00:05:54.768 14:42:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59229 is not found' 00:05:54.768 14:42:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59247 ]] 00:05:54.768 14:42:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59247 00:05:54.768 14:42:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59247 ']' 00:05:54.768 14:42:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59247 00:05:54.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59247) - No such process 00:05:54.768 14:42:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59247 is not found' 00:05:54.768 14:42:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.768 ************************************ 00:05:54.768 END TEST cpu_locks 00:05:54.768 ************************************ 00:05:54.768 00:05:54.768 real 0m27.967s 00:05:54.768 user 0m48.078s 00:05:54.768 sys 0m4.208s 00:05:54.768 14:42:39 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.768 14:42:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.768 ************************************ 00:05:54.768 END TEST event 00:05:54.768 ************************************ 00:05:54.768 00:05:54.768 real 0m52.921s 00:05:54.768 user 1m38.731s 00:05:54.768 sys 0m6.955s 00:05:54.768 14:42:39 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.768 14:42:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.768 14:42:39 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:54.768 14:42:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.768 14:42:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.768 14:42:39 -- common/autotest_common.sh@10 -- # set +x 00:05:54.768 ************************************ 00:05:54.768 START TEST thread 00:05:54.768 ************************************ 00:05:54.768 14:42:39 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:54.768 * Looking for test storage... 
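A side note on the cpu_locks cleanup above: killprocess first probes the pid with kill -0 and inspects the process name with ps -o comm=, which is why an already-exited target is reported as "Process with pid ... is not found" rather than treated as an error. A simplified sketch of that idiom (pid value illustrative; the real helper also special-cases sudo-wrapped processes):

  pid=59229
  if kill -0 "$pid" 2>/dev/null; then
    echo "killing process with pid $pid"; kill "$pid"
  else
    echo "Process with pid $pid is not found"
  fi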
00:05:54.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:54.768 14:42:40 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.768 14:42:40 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.768 14:42:40 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.768 14:42:40 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.768 14:42:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.768 14:42:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.768 14:42:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.768 14:42:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.768 14:42:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.768 14:42:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.768 14:42:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.768 14:42:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.768 14:42:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.768 14:42:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.768 14:42:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.768 14:42:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:54.768 14:42:40 thread -- scripts/common.sh@345 -- # : 1 00:05:54.768 14:42:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.768 14:42:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.768 14:42:40 thread -- scripts/common.sh@365 -- # decimal 1 00:05:54.768 14:42:40 thread -- scripts/common.sh@353 -- # local d=1 00:05:54.768 14:42:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.768 14:42:40 thread -- scripts/common.sh@355 -- # echo 1 00:05:54.768 14:42:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.768 14:42:40 thread -- scripts/common.sh@366 -- # decimal 2 00:05:54.768 14:42:40 thread -- scripts/common.sh@353 -- # local d=2 00:05:54.768 14:42:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.768 14:42:40 thread -- scripts/common.sh@355 -- # echo 2 00:05:54.768 14:42:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.768 14:42:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.768 14:42:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.768 14:42:40 thread -- scripts/common.sh@368 -- # return 0 00:05:54.768 14:42:40 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.768 14:42:40 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.768 --rc genhtml_branch_coverage=1 00:05:54.768 --rc genhtml_function_coverage=1 00:05:54.768 --rc genhtml_legend=1 00:05:54.768 --rc geninfo_all_blocks=1 00:05:54.768 --rc geninfo_unexecuted_blocks=1 00:05:54.768 00:05:54.768 ' 00:05:54.768 14:42:40 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.769 --rc genhtml_branch_coverage=1 00:05:54.769 --rc genhtml_function_coverage=1 00:05:54.769 --rc genhtml_legend=1 00:05:54.769 --rc geninfo_all_blocks=1 00:05:54.769 --rc geninfo_unexecuted_blocks=1 00:05:54.769 00:05:54.769 ' 00:05:54.769 14:42:40 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:54.769 --rc genhtml_branch_coverage=1 00:05:54.769 --rc genhtml_function_coverage=1 00:05:54.769 --rc genhtml_legend=1 00:05:54.769 --rc geninfo_all_blocks=1 00:05:54.769 --rc geninfo_unexecuted_blocks=1 00:05:54.769 00:05:54.769 ' 00:05:54.769 14:42:40 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.769 --rc genhtml_branch_coverage=1 00:05:54.769 --rc genhtml_function_coverage=1 00:05:54.769 --rc genhtml_legend=1 00:05:54.769 --rc geninfo_all_blocks=1 00:05:54.769 --rc geninfo_unexecuted_blocks=1 00:05:54.769 00:05:54.769 ' 00:05:54.769 14:42:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.769 14:42:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:54.769 14:42:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.769 14:42:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.769 ************************************ 00:05:54.769 START TEST thread_poller_perf 00:05:54.769 ************************************ 00:05:54.769 14:42:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.769 [2024-11-17 14:42:40.138317] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:54.769 [2024-11-17 14:42:40.138427] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59396 ] 00:05:54.769 [2024-11-17 14:42:40.293356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.027 [2024-11-17 14:42:40.371084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.027 Running 1000 pollers for 1 seconds with 1 microseconds period. 
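The banner just above corresponds to the poller_perf flags thread.sh passes (-b 1000 -l 1 -t 1); judging from that output, -b is the number of pollers, -t the run time in seconds and -l the poller period in microseconds. The two runs in this suite differ only in the period:

  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, 1 us period (this run)
  test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # 0-period (active) pollers (next run)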
00:05:55.963 [2024-11-17T14:42:41.506Z] ====================================== 00:05:55.963 [2024-11-17T14:42:41.506Z] busy:2606665672 (cyc) 00:05:55.963 [2024-11-17T14:42:41.506Z] total_run_count: 400000 00:05:55.963 [2024-11-17T14:42:41.506Z] tsc_hz: 2600000000 (cyc) 00:05:55.963 [2024-11-17T14:42:41.506Z] ====================================== 00:05:55.963 [2024-11-17T14:42:41.506Z] poller_cost: 6516 (cyc), 2506 (nsec) 00:05:55.963 00:05:55.963 real 0m1.390s 00:05:55.963 user 0m1.223s 00:05:55.963 sys 0m0.061s 00:05:55.963 14:42:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.963 ************************************ 00:05:55.963 END TEST thread_poller_perf 00:05:55.963 ************************************ 00:05:55.963 14:42:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.244 14:42:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.244 14:42:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:56.244 14:42:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.244 14:42:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.244 ************************************ 00:05:56.244 START TEST thread_poller_perf 00:05:56.244 ************************************ 00:05:56.244 14:42:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.244 [2024-11-17 14:42:41.572343] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:56.244 [2024-11-17 14:42:41.572606] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59438 ] 00:05:56.244 [2024-11-17 14:42:41.728652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.502 Running 1000 pollers for 1 seconds with 0 microseconds period. 
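The poller_cost figures in these tables follow from busy cycles, run count and TSC rate; re-deriving the numbers of the 1-microsecond-period run above as a quick sanity check (assumed relationship: cost_cyc = busy / total_run_count, cost_nsec = cost_cyc * 1e9 / tsc_hz):

  busy=2606665672; runs=400000; tsc_hz=2600000000
  echo "scale=1; $busy / $runs" | bc                        # ~6516.7 -> reported 6516 (cyc)
  echo "scale=1; $busy * 10^9 / ($runs * $tsc_hz)" | bc     # ~2506.4 -> reported 2506 (nsec)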
00:05:56.502 [2024-11-17 14:42:41.805338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.438 [2024-11-17T14:42:42.981Z] ====================================== 00:05:57.438 [2024-11-17T14:42:42.981Z] busy:2602411218 (cyc) 00:05:57.438 [2024-11-17T14:42:42.981Z] total_run_count: 5116000 00:05:57.438 [2024-11-17T14:42:42.981Z] tsc_hz: 2600000000 (cyc) 00:05:57.438 [2024-11-17T14:42:42.981Z] ====================================== 00:05:57.438 [2024-11-17T14:42:42.981Z] poller_cost: 508 (cyc), 195 (nsec) 00:05:57.438 ************************************ 00:05:57.438 END TEST thread_poller_perf 00:05:57.438 ************************************ 00:05:57.438 00:05:57.438 real 0m1.375s 00:05:57.438 user 0m1.214s 00:05:57.438 sys 0m0.055s 00:05:57.438 14:42:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.438 14:42:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.438 14:42:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:57.438 ************************************ 00:05:57.438 END TEST thread 00:05:57.438 ************************************ 00:05:57.438 00:05:57.438 real 0m2.995s 00:05:57.438 user 0m2.534s 00:05:57.438 sys 0m0.235s 00:05:57.438 14:42:42 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.438 14:42:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.698 14:42:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:57.698 14:42:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:57.698 14:42:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.698 14:42:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.698 14:42:42 -- common/autotest_common.sh@10 -- # set +x 00:05:57.698 ************************************ 00:05:57.698 START TEST app_cmdline 00:05:57.698 ************************************ 00:05:57.698 14:42:42 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:57.698 * Looking for test storage... 
00:05:57.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.698 14:42:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.698 --rc genhtml_branch_coverage=1 00:05:57.698 --rc genhtml_function_coverage=1 00:05:57.698 --rc genhtml_legend=1 00:05:57.698 --rc geninfo_all_blocks=1 00:05:57.698 --rc geninfo_unexecuted_blocks=1 00:05:57.698 00:05:57.698 ' 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.698 --rc genhtml_branch_coverage=1 00:05:57.698 --rc genhtml_function_coverage=1 00:05:57.698 --rc genhtml_legend=1 00:05:57.698 --rc geninfo_all_blocks=1 00:05:57.698 --rc geninfo_unexecuted_blocks=1 00:05:57.698 
00:05:57.698 ' 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.698 --rc genhtml_branch_coverage=1 00:05:57.698 --rc genhtml_function_coverage=1 00:05:57.698 --rc genhtml_legend=1 00:05:57.698 --rc geninfo_all_blocks=1 00:05:57.698 --rc geninfo_unexecuted_blocks=1 00:05:57.698 00:05:57.698 ' 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.698 --rc genhtml_branch_coverage=1 00:05:57.698 --rc genhtml_function_coverage=1 00:05:57.698 --rc genhtml_legend=1 00:05:57.698 --rc geninfo_all_blocks=1 00:05:57.698 --rc geninfo_unexecuted_blocks=1 00:05:57.698 00:05:57.698 ' 00:05:57.698 14:42:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:57.698 14:42:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59516 00:05:57.698 14:42:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59516 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59516 ']' 00:05:57.698 14:42:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.698 14:42:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.698 [2024-11-17 14:42:43.193660] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:57.698 [2024-11-17 14:42:43.193779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59516 ] 00:05:57.957 [2024-11-17 14:42:43.352504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.957 [2024-11-17 14:42:43.450564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.524 14:42:44 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.524 14:42:44 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:58.524 14:42:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:58.782 { 00:05:58.782 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:05:58.782 "fields": { 00:05:58.782 "major": 25, 00:05:58.782 "minor": 1, 00:05:58.782 "patch": 0, 00:05:58.782 "suffix": "-pre", 00:05:58.782 "commit": "83e8405e4" 00:05:58.782 } 00:05:58.782 } 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:58.782 14:42:44 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.782 14:42:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:58.782 14:42:44 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:58.782 14:42:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.782 14:42:44 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:58.782 14:42:44 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.782 14:42:44 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:58.782 14:42:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.782 14:42:44 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:58.782 14:42:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.783 14:42:44 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:58.783 14:42:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.783 14:42:44 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:58.783 14:42:44 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:58.783 14:42:44 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:59.041 request: 00:05:59.041 { 00:05:59.041 "method": "env_dpdk_get_mem_stats", 00:05:59.041 "req_id": 1 00:05:59.041 } 00:05:59.041 Got JSON-RPC error response 00:05:59.041 response: 00:05:59.041 { 00:05:59.041 "code": -32601, 00:05:59.041 "message": "Method not found" 00:05:59.041 } 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.041 14:42:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59516 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59516 ']' 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59516 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59516 00:05:59.041 killing process with pid 59516 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59516' 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@973 -- # kill 59516 00:05:59.041 14:42:44 app_cmdline -- common/autotest_common.sh@978 -- # wait 59516 00:06:00.946 00:06:00.946 real 0m2.998s 00:06:00.946 user 0m3.266s 00:06:00.946 sys 0m0.425s 00:06:00.946 14:42:45 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.946 ************************************ 00:06:00.946 END TEST app_cmdline 00:06:00.946 ************************************ 00:06:00.946 14:42:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.946 14:42:46 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:00.946 14:42:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.946 14:42:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.946 14:42:46 -- common/autotest_common.sh@10 -- # set +x 00:06:00.946 ************************************ 00:06:00.946 START TEST version 00:06:00.946 ************************************ 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:00.946 * Looking for test storage... 
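One detail worth pulling out of the cmdline test above: the target was started with an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods), so env_dpdk_get_mem_stats is rejected with JSON-RPC -32601 "Method not found" instead of being executed, and rpc_get_methods reports only the two permitted methods. A minimal stand-alone check of that behaviour, again assuming repo-root paths:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 1                                   # assumption: target is listening by now
  scripts/rpc.py spdk_get_version           # allowed -> returns the version object
  scripts/rpc.py rpc_get_methods            # allowed -> lists only the two permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats     # not on the list -> -32601 "Method not found"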
00:06:00.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.946 14:42:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.946 14:42:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.946 14:42:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.946 14:42:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.946 14:42:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.946 14:42:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.946 14:42:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.946 14:42:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.946 14:42:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.946 14:42:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.946 14:42:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.946 14:42:46 version -- scripts/common.sh@344 -- # case "$op" in 00:06:00.946 14:42:46 version -- scripts/common.sh@345 -- # : 1 00:06:00.946 14:42:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.946 14:42:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.946 14:42:46 version -- scripts/common.sh@365 -- # decimal 1 00:06:00.946 14:42:46 version -- scripts/common.sh@353 -- # local d=1 00:06:00.946 14:42:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.946 14:42:46 version -- scripts/common.sh@355 -- # echo 1 00:06:00.946 14:42:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.946 14:42:46 version -- scripts/common.sh@366 -- # decimal 2 00:06:00.946 14:42:46 version -- scripts/common.sh@353 -- # local d=2 00:06:00.946 14:42:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.946 14:42:46 version -- scripts/common.sh@355 -- # echo 2 00:06:00.946 14:42:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.946 14:42:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.946 14:42:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.946 14:42:46 version -- scripts/common.sh@368 -- # return 0 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.946 --rc genhtml_branch_coverage=1 00:06:00.946 --rc genhtml_function_coverage=1 00:06:00.946 --rc genhtml_legend=1 00:06:00.946 --rc geninfo_all_blocks=1 00:06:00.946 --rc geninfo_unexecuted_blocks=1 00:06:00.946 00:06:00.946 ' 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.946 --rc genhtml_branch_coverage=1 00:06:00.946 --rc genhtml_function_coverage=1 00:06:00.946 --rc genhtml_legend=1 00:06:00.946 --rc geninfo_all_blocks=1 00:06:00.946 --rc geninfo_unexecuted_blocks=1 00:06:00.946 00:06:00.946 ' 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.946 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:00.946 --rc genhtml_branch_coverage=1 00:06:00.946 --rc genhtml_function_coverage=1 00:06:00.946 --rc genhtml_legend=1 00:06:00.946 --rc geninfo_all_blocks=1 00:06:00.946 --rc geninfo_unexecuted_blocks=1 00:06:00.946 00:06:00.946 ' 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.946 --rc genhtml_branch_coverage=1 00:06:00.946 --rc genhtml_function_coverage=1 00:06:00.946 --rc genhtml_legend=1 00:06:00.946 --rc geninfo_all_blocks=1 00:06:00.946 --rc geninfo_unexecuted_blocks=1 00:06:00.946 00:06:00.946 ' 00:06:00.946 14:42:46 version -- app/version.sh@17 -- # get_header_version major 00:06:00.946 14:42:46 version -- app/version.sh@14 -- # cut -f2 00:06:00.946 14:42:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:00.946 14:42:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:00.946 14:42:46 version -- app/version.sh@17 -- # major=25 00:06:00.946 14:42:46 version -- app/version.sh@18 -- # get_header_version minor 00:06:00.946 14:42:46 version -- app/version.sh@14 -- # cut -f2 00:06:00.946 14:42:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:00.946 14:42:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:00.946 14:42:46 version -- app/version.sh@18 -- # minor=1 00:06:00.946 14:42:46 version -- app/version.sh@19 -- # get_header_version patch 00:06:00.946 14:42:46 version -- app/version.sh@14 -- # cut -f2 00:06:00.946 14:42:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:00.946 14:42:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:00.946 14:42:46 version -- app/version.sh@19 -- # patch=0 00:06:00.946 14:42:46 version -- app/version.sh@20 -- # get_header_version suffix 00:06:00.946 14:42:46 version -- app/version.sh@14 -- # cut -f2 00:06:00.946 14:42:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:00.946 14:42:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:00.946 14:42:46 version -- app/version.sh@20 -- # suffix=-pre 00:06:00.946 14:42:46 version -- app/version.sh@22 -- # version=25.1 00:06:00.946 14:42:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:00.946 14:42:46 version -- app/version.sh@28 -- # version=25.1rc0 00:06:00.946 14:42:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:00.946 14:42:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:00.946 14:42:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:00.946 14:42:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:00.946 00:06:00.946 real 0m0.176s 00:06:00.946 user 0m0.112s 00:06:00.946 sys 0m0.084s 00:06:00.946 ************************************ 00:06:00.946 END TEST version 00:06:00.946 ************************************ 00:06:00.946 14:42:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.946 14:42:46 version -- common/autotest_common.sh@10 -- # set +x 00:06:00.946 14:42:46 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:00.946 14:42:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:00.946 14:42:46 -- spdk/autotest.sh@194 -- # uname -s 00:06:00.946 14:42:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:00.946 14:42:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:00.946 14:42:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:00.946 14:42:46 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:00.946 14:42:46 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:00.946 14:42:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:00.946 14:42:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.946 14:42:46 -- common/autotest_common.sh@10 -- # set +x 00:06:00.946 ************************************ 00:06:00.946 START TEST blockdev_nvme 00:06:00.946 ************************************ 00:06:00.946 14:42:46 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:00.946 * Looking for test storage... 00:06:00.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:00.946 14:42:46 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.946 14:42:46 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.946 14:42:46 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.946 14:42:46 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.946 14:42:46 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.947 14:42:46 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.947 --rc genhtml_branch_coverage=1 00:06:00.947 --rc genhtml_function_coverage=1 00:06:00.947 --rc genhtml_legend=1 00:06:00.947 --rc geninfo_all_blocks=1 00:06:00.947 --rc geninfo_unexecuted_blocks=1 00:06:00.947 00:06:00.947 ' 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.947 --rc genhtml_branch_coverage=1 00:06:00.947 --rc genhtml_function_coverage=1 00:06:00.947 --rc genhtml_legend=1 00:06:00.947 --rc geninfo_all_blocks=1 00:06:00.947 --rc geninfo_unexecuted_blocks=1 00:06:00.947 00:06:00.947 ' 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.947 --rc genhtml_branch_coverage=1 00:06:00.947 --rc genhtml_function_coverage=1 00:06:00.947 --rc genhtml_legend=1 00:06:00.947 --rc geninfo_all_blocks=1 00:06:00.947 --rc geninfo_unexecuted_blocks=1 00:06:00.947 00:06:00.947 ' 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.947 --rc genhtml_branch_coverage=1 00:06:00.947 --rc genhtml_function_coverage=1 00:06:00.947 --rc genhtml_legend=1 00:06:00.947 --rc geninfo_all_blocks=1 00:06:00.947 --rc geninfo_unexecuted_blocks=1 00:06:00.947 00:06:00.947 ' 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:00.947 14:42:46 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59694 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59694 00:06:00.947 14:42:46 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59694 ']' 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.947 14:42:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:00.947 [2024-11-17 14:42:46.466728] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
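The blockdev suite will next load a gen_nvme.sh-generated config that attaches the four emulated PCIe controllers (Nvme0-Nvme3 at 0000:00:10.0 through 0000:00:13.0). The same attachments could equally be made one at a time over RPC; a sketch, with controller names and addresses taken from the config shown below:

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0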
00:06:00.947 [2024-11-17 14:42:46.467009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59694 ] 00:06:01.205 [2024-11-17 14:42:46.632265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.205 [2024-11-17 14:42:46.741596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.149 14:42:47 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.149 14:42:47 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:02.149 14:42:47 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:02.149 14:42:47 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:02.149 14:42:47 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:02.149 14:42:47 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:02.149 14:42:47 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:02.149 14:42:47 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:02.149 14:42:47 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.149 14:42:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.412 14:42:47 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "95a20033-c3c4-403b-8325-07986846b3b6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "95a20033-c3c4-403b-8325-07986846b3b6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3c0160dd-f4b4-4bbe-be5e-8dd414a11336"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3c0160dd-f4b4-4bbe-be5e-8dd414a11336",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "60504152-8eb3-49d0-b24a-662dc5437210"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "60504152-8eb3-49d0-b24a-662dc5437210",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3e026757-4ac4-4ed2-b861-179a4390ff52"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3e026757-4ac4-4ed2-b861-179a4390ff52",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c6175664-68e9-42b5-9538-3af2998e1512"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "c6175664-68e9-42b5-9538-3af2998e1512",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1ef054c4-5229-4a93-9f6a-7f0e7e7a148c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1ef054c4-5229-4a93-9f6a-7f0e7e7a148c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:02.412 14:42:47 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59694 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59694 ']' 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59694 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:02.412 14:42:47 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59694 00:06:02.412 killing process with pid 59694 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59694' 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59694 00:06:02.412 14:42:47 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59694 00:06:04.325 14:42:49 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:04.325 14:42:49 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:04.325 14:42:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:04.326 14:42:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.326 14:42:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.326 ************************************ 00:06:04.326 START TEST bdev_hello_world 00:06:04.326 ************************************ 00:06:04.326 14:42:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:04.326 [2024-11-17 14:42:49.606038] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:04.326 [2024-11-17 14:42:49.606182] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59778 ] 00:06:04.326 [2024-11-17 14:42:49.768942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.586 [2024-11-17 14:42:49.894413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.156 [2024-11-17 14:42:50.485040] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:05.157 [2024-11-17 14:42:50.485101] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:05.157 [2024-11-17 14:42:50.485128] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:05.157 [2024-11-17 14:42:50.487886] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:05.157 [2024-11-17 14:42:50.489000] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:05.157 [2024-11-17 14:42:50.489039] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:05.157 [2024-11-17 14:42:50.489521] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:05.157 00:06:05.157 [2024-11-17 14:42:50.489555] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:05.729 00:06:05.729 real 0m1.540s 00:06:05.729 user 0m1.199s 00:06:05.729 sys 0m0.231s 00:06:05.729 14:42:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.729 ************************************ 00:06:05.729 END TEST bdev_hello_world 00:06:05.729 ************************************ 00:06:05.729 14:42:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:05.729 14:42:51 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:05.729 14:42:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:05.729 14:42:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.729 14:42:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.729 ************************************ 00:06:05.729 START TEST bdev_bounds 00:06:05.729 ************************************ 00:06:05.729 14:42:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:05.729 Process bdevio pid: 59814 00:06:05.729 14:42:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59814 00:06:05.729 14:42:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.729 14:42:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59814' 00:06:05.729 14:42:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59814 00:06:05.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.729 14:42:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59814 ']' 00:06:05.729 14:42:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.730 14:42:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:05.730 14:42:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.730 14:42:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.730 14:42:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.730 14:42:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:05.730 [2024-11-17 14:42:51.205517] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:05.730 [2024-11-17 14:42:51.205645] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59814 ] 00:06:05.989 [2024-11-17 14:42:51.363461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.989 [2024-11-17 14:42:51.452570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.989 [2024-11-17 14:42:51.452886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.989 [2024-11-17 14:42:51.452904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.564 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.564 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:06.564 14:42:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:06.833 I/O targets: 00:06:06.833 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:06.833 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:06.833 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:06.833 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:06.833 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:06.833 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:06.833 00:06:06.833 00:06:06.833 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.833 http://cunit.sourceforge.net/ 00:06:06.833 00:06:06.833 00:06:06.833 Suite: bdevio tests on: Nvme3n1 00:06:06.833 Test: blockdev write read block ...passed 00:06:06.833 Test: blockdev write zeroes read block ...passed 00:06:06.833 Test: blockdev write zeroes read no split ...passed 00:06:06.833 Test: blockdev write zeroes read split ...passed 00:06:06.833 Test: blockdev write zeroes read split partial ...passed 00:06:06.833 Test: blockdev reset ...[2024-11-17 14:42:52.180777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:06.833 [2024-11-17 14:42:52.183528] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:06.833 passed 00:06:06.833 Test: blockdev write read 8 blocks ...passed 00:06:06.833 Test: blockdev write read size > 128k ...passed 00:06:06.833 Test: blockdev write read invalid size ...passed 00:06:06.833 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.833 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.833 Test: blockdev write read max offset ...passed 00:06:06.833 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.833 Test: blockdev writev readv 8 blocks ...passed 00:06:06.833 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.833 Test: blockdev writev readv block ...passed 00:06:06.833 Test: blockdev writev readv size > 128k ...passed 00:06:06.833 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.833 Test: blockdev comparev and writev ...[2024-11-17 14:42:52.189956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7e0a000 len:0x1000 00:06:06.833 [2024-11-17 14:42:52.190099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.833 passed 00:06:06.833 Test: blockdev nvme passthru rw ...passed 00:06:06.833 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:42:52.190675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.833 [2024-11-17 14:42:52.190761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.833 passed 00:06:06.833 Test: blockdev nvme admin passthru ...passed 00:06:06.833 Test: blockdev copy ...passed 00:06:06.833 Suite: bdevio tests on: Nvme2n3 00:06:06.833 Test: blockdev write read block ...passed 00:06:06.833 Test: blockdev write zeroes read block ...passed 00:06:06.833 Test: blockdev write zeroes read no split ...passed 00:06:06.833 Test: blockdev write zeroes read split ...passed 00:06:06.833 Test: blockdev write zeroes read split partial ...passed 00:06:06.833 Test: blockdev reset ...[2024-11-17 14:42:52.239673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:06.833 [2024-11-17 14:42:52.242394] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
00:06:06.833 00:06:06.833 Test: blockdev write read 8 blocks ...passed 00:06:06.833 Test: blockdev write read size > 128k ...passed 00:06:06.833 Test: blockdev write read invalid size ...passed 00:06:06.833 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.833 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.833 Test: blockdev write read max offset ...passed 00:06:06.833 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.833 Test: blockdev writev readv 8 blocks ...passed 00:06:06.833 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.833 Test: blockdev writev readv block ...passed 00:06:06.833 Test: blockdev writev readv size > 128k ...passed 00:06:06.833 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.833 Test: blockdev comparev and writev ...[2024-11-17 14:42:52.249138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29b006000 len:0x1000 00:06:06.833 [2024-11-17 14:42:52.249174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.833 passed 00:06:06.833 Test: blockdev nvme passthru rw ...passed 00:06:06.833 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:42:52.249652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.833 passed 00:06:06.833 Test: blockdev nvme admin passthru ...[2024-11-17 14:42:52.249670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.833 passed 00:06:06.833 Test: blockdev copy ...passed 00:06:06.833 Suite: bdevio tests on: Nvme2n2 00:06:06.833 Test: blockdev write read block ...passed 00:06:06.833 Test: blockdev write zeroes read block ...passed 00:06:06.833 Test: blockdev write zeroes read no split ...passed 00:06:06.833 Test: blockdev write zeroes read split ...passed 00:06:06.833 Test: blockdev write zeroes read split partial ...passed 00:06:06.833 Test: blockdev reset ...[2024-11-17 14:42:52.289821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:06.833 [2024-11-17 14:42:52.292414] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:06.833 passed 00:06:06.833 Test: blockdev write read 8 blocks ...passed 00:06:06.833 Test: blockdev write read size > 128k ...passed 00:06:06.833 Test: blockdev write read invalid size ...passed 00:06:06.833 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.833 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.833 Test: blockdev write read max offset ...passed 00:06:06.833 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.833 Test: blockdev writev readv 8 blocks ...passed 00:06:06.833 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.833 Test: blockdev writev readv block ...passed 00:06:06.833 Test: blockdev writev readv size > 128k ...passed 00:06:06.833 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.833 Test: blockdev comparev and writev ...[2024-11-17 14:42:52.298287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d363c000 len:0x1000 00:06:06.833 [2024-11-17 14:42:52.298391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.833 passed 00:06:06.833 Test: blockdev nvme passthru rw ...passed 00:06:06.833 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:42:52.298772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.833 [2024-11-17 14:42:52.298793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.833 passed 00:06:06.833 Test: blockdev nvme admin passthru ...passed 00:06:06.833 Test: blockdev copy ...passed 00:06:06.833 Suite: bdevio tests on: Nvme2n1 00:06:06.833 Test: blockdev write read block ...passed 00:06:06.833 Test: blockdev write zeroes read block ...passed 00:06:06.833 Test: blockdev write zeroes read no split ...passed 00:06:06.833 Test: blockdev write zeroes read split ...passed 00:06:06.833 Test: blockdev write zeroes read split partial ...passed 00:06:06.833 Test: blockdev reset ...[2024-11-17 14:42:52.339691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:06.833 passed 00:06:06.833 Test: blockdev write read 8 blocks ...[2024-11-17 14:42:52.342236] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:06.833 passed 00:06:06.833 Test: blockdev write read size > 128k ...passed 00:06:06.833 Test: blockdev write read invalid size ...passed 00:06:06.833 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.833 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.833 Test: blockdev write read max offset ...passed 00:06:06.833 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.833 Test: blockdev writev readv 8 blocks ...passed 00:06:06.833 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.833 Test: blockdev writev readv block ...passed 00:06:06.833 Test: blockdev writev readv size > 128k ...passed 00:06:06.833 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.833 Test: blockdev comparev and writev ...[2024-11-17 14:42:52.348257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3638000 len:0x1000 00:06:06.833 [2024-11-17 14:42:52.348292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.833 passed 00:06:06.833 Test: blockdev nvme passthru rw ...passed 00:06:06.833 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:42:52.348803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.833 [2024-11-17 14:42:52.348820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.833 passed 00:06:06.833 Test: blockdev nvme admin passthru ...passed 00:06:06.833 Test: blockdev copy ...passed 00:06:06.833 Suite: bdevio tests on: Nvme1n1 00:06:06.833 Test: blockdev write read block ...passed 00:06:06.833 Test: blockdev write zeroes read block ...passed 00:06:06.833 Test: blockdev write zeroes read no split ...passed 00:06:06.833 Test: blockdev write zeroes read split ...passed 00:06:07.092 Test: blockdev write zeroes read split partial ...passed 00:06:07.092 Test: blockdev reset ...[2024-11-17 14:42:52.390324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:07.092 [2024-11-17 14:42:52.392733] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:07.092 passed 00:06:07.092 Test: blockdev write read 8 blocks ...passed 00:06:07.092 Test: blockdev write read size > 128k ...passed 00:06:07.092 Test: blockdev write read invalid size ...passed 00:06:07.092 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:07.092 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:07.092 Test: blockdev write read max offset ...passed 00:06:07.092 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:07.092 Test: blockdev writev readv 8 blocks ...passed 00:06:07.092 Test: blockdev writev readv 30 x 1block ...passed 00:06:07.092 Test: blockdev writev readv block ...passed 00:06:07.092 Test: blockdev writev readv size > 128k ...passed 00:06:07.092 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:07.092 Test: blockdev comparev and writev ...[2024-11-17 14:42:52.398793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3634000 len:0x1000 00:06:07.092 [2024-11-17 14:42:52.398907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:07.092 passed 00:06:07.092 Test: blockdev nvme passthru rw ...passed 00:06:07.092 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:42:52.400100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:07.092 [2024-11-17 14:42:52.400191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:07.092 passed 00:06:07.092 Test: blockdev nvme admin passthru ...passed 00:06:07.092 Test: blockdev copy ...passed 00:06:07.092 Suite: bdevio tests on: Nvme0n1 00:06:07.092 Test: blockdev write read block ...passed 00:06:07.092 Test: blockdev write zeroes read block ...passed 00:06:07.092 Test: blockdev write zeroes read no split ...passed 00:06:07.092 Test: blockdev write zeroes read split ...passed 00:06:07.092 Test: blockdev write zeroes read split partial ...passed 00:06:07.092 Test: blockdev reset ...[2024-11-17 14:42:52.463368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:07.092 [2024-11-17 14:42:52.465910] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:06:07.092 Test: blockdev write read 8 blocks ...
00:06:07.092 passed 00:06:07.092 Test: blockdev write read size > 128k ...passed 00:06:07.092 Test: blockdev write read invalid size ...passed 00:06:07.092 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:07.092 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:07.092 Test: blockdev write read max offset ...passed 00:06:07.092 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:07.092 Test: blockdev writev readv 8 blocks ...passed 00:06:07.092 Test: blockdev writev readv 30 x 1block ...passed 00:06:07.092 Test: blockdev writev readv block ...passed 00:06:07.092 Test: blockdev writev readv size > 128k ...passed 00:06:07.092 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:07.092 Test: blockdev comparev and writev ...passed 00:06:07.092 Test: blockdev nvme passthru rw ...[2024-11-17 14:42:52.472341] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:07.092 separate metadata which is not supported yet. 00:06:07.092 passed 00:06:07.092 Test: blockdev nvme passthru vendor specific ...passed 00:06:07.092 Test: blockdev nvme admin passthru ...[2024-11-17 14:42:52.472695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:07.092 [2024-11-17 14:42:52.472727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:07.092 passed 00:06:07.093 Test: blockdev copy ...passed 00:06:07.093 00:06:07.093 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.093 suites 6 6 n/a 0 0 00:06:07.093 tests 138 138 138 0 0 00:06:07.093 asserts 893 893 893 0 n/a 00:06:07.093 00:06:07.093 Elapsed time = 0.915 seconds 00:06:07.093 0 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59814 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59814 ']' 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59814 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59814 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59814' 00:06:07.093 killing process with pid 59814 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59814 00:06:07.093 14:42:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59814 00:06:07.659 14:42:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:07.659 00:06:07.659 real 0m1.888s 00:06:07.659 user 0m4.861s 00:06:07.659 sys 0m0.255s 00:06:07.659 14:42:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.659 14:42:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:07.659 ************************************ 00:06:07.659 END TEST bdev_bounds 00:06:07.659 
************************************ 00:06:07.659 14:42:53 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:07.659 14:42:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:07.659 14:42:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.659 14:42:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:07.659 ************************************ 00:06:07.659 START TEST bdev_nbd 00:06:07.659 ************************************ 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59868 00:06:07.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59868 /var/tmp/spdk-nbd.sock 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59868 ']' 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.659 14:42:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.660 14:42:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:07.660 [2024-11-17 14:42:53.154672] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:07.660 [2024-11-17 14:42:53.154787] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.918 [2024-11-17 14:42:53.310831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.918 [2024-11-17 14:42:53.388612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.486 14:42:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.486 14:42:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:08.486 14:42:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:08.486 14:42:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.486 14:42:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:08.486 14:42:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:08.486 14:42:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:08.486 14:42:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.486 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:08.486 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:08.486 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:08.486 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:08.486 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:08.486 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:08.486 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 
00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.745 1+0 records in 00:06:08.745 1+0 records out 00:06:08.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509533 s, 8.0 MB/s 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:08.745 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.004 1+0 records in 
00:06:09.004 1+0 records out 00:06:09.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710175 s, 5.8 MB/s 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:09.004 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.263 1+0 records in 00:06:09.263 1+0 records out 00:06:09.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034428 s, 11.9 MB/s 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:09.263 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd3 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.521 1+0 records in 00:06:09.521 1+0 records out 00:06:09.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270185 s, 15.2 MB/s 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:09.521 14:42:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.780 1+0 records in 00:06:09.780 1+0 records out 00:06:09.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560994 s, 7.3 MB/s 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.780 14:42:55 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:09.780 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.040 1+0 records in 00:06:10.040 1+0 records out 00:06:10.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440722 s, 9.3 MB/s 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:10.040 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.300 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:10.300 { 00:06:10.300 "nbd_device": "/dev/nbd0", 00:06:10.300 "bdev_name": "Nvme0n1" 00:06:10.300 }, 00:06:10.300 { 00:06:10.300 "nbd_device": "/dev/nbd1", 00:06:10.300 "bdev_name": "Nvme1n1" 00:06:10.300 }, 00:06:10.300 { 00:06:10.300 "nbd_device": "/dev/nbd2", 00:06:10.300 "bdev_name": "Nvme2n1" 00:06:10.300 }, 00:06:10.300 { 00:06:10.300 "nbd_device": "/dev/nbd3", 00:06:10.300 "bdev_name": "Nvme2n2" 00:06:10.300 }, 00:06:10.300 { 
00:06:10.300 "nbd_device": "/dev/nbd4", 00:06:10.300 "bdev_name": "Nvme2n3" 00:06:10.300 }, 00:06:10.301 { 00:06:10.301 "nbd_device": "/dev/nbd5", 00:06:10.301 "bdev_name": "Nvme3n1" 00:06:10.301 } 00:06:10.301 ]' 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:10.301 { 00:06:10.301 "nbd_device": "/dev/nbd0", 00:06:10.301 "bdev_name": "Nvme0n1" 00:06:10.301 }, 00:06:10.301 { 00:06:10.301 "nbd_device": "/dev/nbd1", 00:06:10.301 "bdev_name": "Nvme1n1" 00:06:10.301 }, 00:06:10.301 { 00:06:10.301 "nbd_device": "/dev/nbd2", 00:06:10.301 "bdev_name": "Nvme2n1" 00:06:10.301 }, 00:06:10.301 { 00:06:10.301 "nbd_device": "/dev/nbd3", 00:06:10.301 "bdev_name": "Nvme2n2" 00:06:10.301 }, 00:06:10.301 { 00:06:10.301 "nbd_device": "/dev/nbd4", 00:06:10.301 "bdev_name": "Nvme2n3" 00:06:10.301 }, 00:06:10.301 { 00:06:10.301 "nbd_device": "/dev/nbd5", 00:06:10.301 "bdev_name": "Nvme3n1" 00:06:10.301 } 00:06:10.301 ]' 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.301 14:42:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.562 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.823 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.084 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:11.342 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:11.343 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:11.343 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:11.343 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.343 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.343 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:11.343 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.343 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.343 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.343 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd5 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.603 14:42:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:11.603 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.604 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:11.604 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.604 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.604 14:42:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.604 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:11.604 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.604 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:11.604 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.604 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:11.604 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:11.865 /dev/nbd0 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.865 1+0 records in 00:06:11.865 1+0 records out 00:06:11.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325454 s, 12.6 MB/s 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:11.865 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:12.124 /dev/nbd1 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( 
i <= 20 )) 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.124 1+0 records in 00:06:12.124 1+0 records out 00:06:12.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305244 s, 13.4 MB/s 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:12.124 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:12.385 /dev/nbd10 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.385 1+0 records in 00:06:12.385 1+0 records out 00:06:12.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382622 s, 10.7 MB/s 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:12.385 14:42:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:12.644 /dev/nbd11 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.644 1+0 records in 00:06:12.644 1+0 records out 00:06:12.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526808 s, 7.8 MB/s 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:12.644 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:12.903 /dev/nbd12 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.903 14:42:58 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.903 1+0 records in 00:06:12.903 1+0 records out 00:06:12.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620421 s, 6.6 MB/s 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:12.903 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:13.162 /dev/nbd13 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.162 1+0 records in 00:06:13.162 1+0 records out 00:06:13.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561978 s, 7.3 MB/s 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.162 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.420 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.420 { 00:06:13.420 "nbd_device": "/dev/nbd0", 00:06:13.420 "bdev_name": "Nvme0n1" 00:06:13.420 }, 00:06:13.420 { 00:06:13.420 "nbd_device": "/dev/nbd1", 00:06:13.420 "bdev_name": "Nvme1n1" 00:06:13.420 }, 00:06:13.420 { 00:06:13.420 "nbd_device": "/dev/nbd10", 00:06:13.420 "bdev_name": "Nvme2n1" 00:06:13.420 }, 00:06:13.420 { 00:06:13.421 "nbd_device": "/dev/nbd11", 00:06:13.421 "bdev_name": "Nvme2n2" 00:06:13.421 }, 00:06:13.421 { 00:06:13.421 "nbd_device": "/dev/nbd12", 00:06:13.421 "bdev_name": "Nvme2n3" 00:06:13.421 }, 00:06:13.421 { 00:06:13.421 "nbd_device": "/dev/nbd13", 00:06:13.421 "bdev_name": "Nvme3n1" 00:06:13.421 } 00:06:13.421 ]' 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.421 { 00:06:13.421 "nbd_device": "/dev/nbd0", 00:06:13.421 "bdev_name": "Nvme0n1" 00:06:13.421 }, 00:06:13.421 { 00:06:13.421 "nbd_device": "/dev/nbd1", 00:06:13.421 "bdev_name": "Nvme1n1" 00:06:13.421 }, 00:06:13.421 { 00:06:13.421 "nbd_device": "/dev/nbd10", 00:06:13.421 "bdev_name": "Nvme2n1" 00:06:13.421 }, 00:06:13.421 { 00:06:13.421 "nbd_device": "/dev/nbd11", 00:06:13.421 "bdev_name": "Nvme2n2" 00:06:13.421 }, 00:06:13.421 { 00:06:13.421 "nbd_device": "/dev/nbd12", 00:06:13.421 "bdev_name": "Nvme2n3" 00:06:13.421 }, 00:06:13.421 { 00:06:13.421 "nbd_device": "/dev/nbd13", 00:06:13.421 "bdev_name": "Nvme3n1" 00:06:13.421 } 00:06:13.421 ]' 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.421 /dev/nbd1 00:06:13.421 /dev/nbd10 00:06:13.421 /dev/nbd11 00:06:13.421 /dev/nbd12 00:06:13.421 /dev/nbd13' 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.421 /dev/nbd1 00:06:13.421 /dev/nbd10 00:06:13.421 /dev/nbd11 00:06:13.421 /dev/nbd12 00:06:13.421 /dev/nbd13' 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:13.421 256+0 records in 00:06:13.421 256+0 records out 00:06:13.421 1048576 bytes 
(1.0 MB, 1.0 MiB) copied, 0.00606697 s, 173 MB/s 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.421 256+0 records in 00:06:13.421 256+0 records out 00:06:13.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0503029 s, 20.8 MB/s 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.421 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.679 256+0 records in 00:06:13.679 256+0 records out 00:06:13.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167603 s, 6.3 MB/s 00:06:13.679 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.679 14:42:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:13.937 256+0 records in 00:06:13.937 256+0 records out 00:06:13.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23222 s, 4.5 MB/s 00:06:13.937 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.937 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:13.937 256+0 records in 00:06:13.937 256+0 records out 00:06:13.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.111679 s, 9.4 MB/s 00:06:13.937 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.937 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:13.937 256+0 records in 00:06:13.937 256+0 records out 00:06:13.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0764061 s, 13.7 MB/s 00:06:13.937 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.937 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:14.195 256+0 records in 00:06:14.195 256+0 records out 00:06:14.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.109748 s, 9.6 MB/s 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.195 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd1 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.454 14:42:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.723 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.011 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.269 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:15.528 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.528 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.528 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.528 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.528 14:43:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:15.528 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:15.786 malloc_lvol_verify 00:06:15.786 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:16.045 0eb00844-3d2e-4269-bd5e-6c6c7e2dcd2c 00:06:16.045 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:16.303 9ab2ed48-75c0-4ce8-8b8f-448a2d7af0ba 00:06:16.303 14:43:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:16.562 /dev/nbd0 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:16.562 mke2fs 1.47.0 (5-Feb-2023) 00:06:16.562 Discarding device blocks: 0/4096 done 00:06:16.562 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:16.562 00:06:16.562 Allocating group tables: 0/1 done 00:06:16.562 Writing inode tables: 0/1 done 00:06:16.562 Creating journal (1024 blocks): done 00:06:16.562 Writing superblocks and filesystem accounting information: 0/1 done 00:06:16.562 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.562 14:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.562 14:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.562 14:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.562 14:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.562 14:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.562 14:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.562 14:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59868 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59868 ']' 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59868 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59868 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.822 killing process with pid 59868 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59868' 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59868 00:06:16.822 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59868 00:06:17.392 14:43:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:17.392 00:06:17.392 real 0m9.812s 00:06:17.392 user 0m13.757s 00:06:17.392 sys 0m3.169s 00:06:17.392 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.392 14:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:17.392 ************************************ 00:06:17.392 END TEST bdev_nbd 00:06:17.392 ************************************ 00:06:17.650 skipping fio tests on NVMe due to multi-ns failures. 00:06:17.650 14:43:02 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:17.650 14:43:02 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:17.650 14:43:02 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:17.650 14:43:02 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:17.650 14:43:02 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:17.650 14:43:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:17.650 14:43:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.650 14:43:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.650 ************************************ 00:06:17.650 START TEST bdev_verify 00:06:17.650 ************************************ 00:06:17.650 14:43:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:17.650 [2024-11-17 14:43:03.019464] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:17.650 [2024-11-17 14:43:03.019593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60241 ] 00:06:17.650 [2024-11-17 14:43:03.178649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.909 [2024-11-17 14:43:03.276673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.909 [2024-11-17 14:43:03.276692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.475 Running I/O for 5 seconds... 
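The verify pass launched above drives all six NVMe bdevs (Nvme0n1, Nvme1n1, Nvme2n1 through Nvme2n3, Nvme3n1) through bdevperf at queue depth 128 with 4 KiB I/O for 5 seconds on two cores (-m 0x3). The file it consumes via --json is generated earlier by the suite and never appears verbatim in this log; the sketch below is only an illustration of the general shape such a config can take, using the real bdev_nvme_attach_controller RPC method but a placeholder controller name and PCI address.

# Hedged sketch, not the suite's actual test/bdev/bdev.json; name and traddr are placeholders.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# A file of this shape is what the --json flag in the bdevperf invocations above and below points at.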
00:06:20.785 22080.00 IOPS, 86.25 MiB/s [2024-11-17T14:43:07.263Z] 22464.00 IOPS, 87.75 MiB/s [2024-11-17T14:43:08.197Z] 22869.33 IOPS, 89.33 MiB/s [2024-11-17T14:43:09.133Z] 22896.00 IOPS, 89.44 MiB/s [2024-11-17T14:43:09.133Z] 22579.20 IOPS, 88.20 MiB/s 00:06:23.590 Latency(us) 00:06:23.590 [2024-11-17T14:43:09.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:23.590 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.590 Verification LBA range: start 0x0 length 0xbd0bd 00:06:23.590 Nvme0n1 : 5.07 1817.47 7.10 0.00 0.00 70146.62 9880.81 102841.11 00:06:23.590 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.590 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:23.590 Nvme0n1 : 5.08 1888.57 7.38 0.00 0.00 66748.64 10384.94 67350.84 00:06:23.590 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.590 Verification LBA range: start 0x0 length 0xa0000 00:06:23.590 Nvme1n1 : 5.07 1817.00 7.10 0.00 0.00 70004.38 12149.37 88322.36 00:06:23.590 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.590 Verification LBA range: start 0xa0000 length 0xa0000 00:06:23.590 Nvme1n1 : 5.07 1891.92 7.39 0.00 0.00 67480.41 10435.35 95985.03 00:06:23.590 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.590 Verification LBA range: start 0x0 length 0x80000 00:06:23.590 Nvme2n1 : 5.07 1816.43 7.10 0.00 0.00 69878.89 13510.50 81062.99 00:06:23.590 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.590 Verification LBA range: start 0x80000 length 0x80000 00:06:23.590 Nvme2n1 : 5.08 1891.01 7.39 0.00 0.00 67399.32 12048.54 87515.77 00:06:23.590 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.590 Verification LBA range: start 0x0 length 0x80000 00:06:23.590 Nvme2n2 : 5.08 1815.59 7.09 0.00 0.00 69644.38 14317.10 73400.32 00:06:23.591 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.591 Verification LBA range: start 0x80000 length 0x80000 00:06:23.591 Nvme2n2 : 5.08 1890.11 7.38 0.00 0.00 67149.20 13308.85 71383.83 00:06:23.591 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.591 Verification LBA range: start 0x0 length 0x80000 00:06:23.591 Nvme2n3 : 5.09 1823.47 7.12 0.00 0.00 69220.90 5091.64 64931.05 00:06:23.591 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.591 Verification LBA range: start 0x80000 length 0x80000 00:06:23.591 Nvme2n3 : 5.08 1889.60 7.38 0.00 0.00 67017.94 13913.80 62914.56 00:06:23.591 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.591 Verification LBA range: start 0x0 length 0x20000 00:06:23.591 Nvme3n1 : 5.10 1831.47 7.15 0.00 0.00 68835.88 9578.34 64124.46 00:06:23.591 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.591 Verification LBA range: start 0x20000 length 0x20000 00:06:23.591 Nvme3n1 : 5.08 1889.10 7.38 0.00 0.00 66879.42 13208.02 65334.35 00:06:23.591 [2024-11-17T14:43:09.134Z] =================================================================================================================== 00:06:23.591 [2024-11-17T14:43:09.134Z] Total : 22261.73 86.96 0.00 0.00 68342.62 5091.64 102841.11 00:06:24.971 00:06:24.971 real 0m7.325s 00:06:24.971 user 0m13.738s 00:06:24.971 sys 0m0.218s 00:06:24.971 14:43:10 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.971 14:43:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:24.971 ************************************ 00:06:24.971 END TEST bdev_verify 00:06:24.971 ************************************ 00:06:24.971 14:43:10 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:24.971 14:43:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:24.971 14:43:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.971 14:43:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.971 ************************************ 00:06:24.971 START TEST bdev_verify_big_io 00:06:24.971 ************************************ 00:06:24.971 14:43:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:24.971 [2024-11-17 14:43:10.399669] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:24.971 [2024-11-17 14:43:10.399781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60339 ] 00:06:25.229 [2024-11-17 14:43:10.561203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.229 [2024-11-17 14:43:10.660738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.229 [2024-11-17 14:43:10.660834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.801 Running I/O for 5 seconds... 
00:06:29.077 0.00 IOPS, 0.00 MiB/s [2024-11-17T14:43:15.995Z] 1171.00 IOPS, 73.19 MiB/s [2024-11-17T14:43:17.369Z] 1408.33 IOPS, 88.02 MiB/s [2024-11-17T14:43:17.628Z] 1718.00 IOPS, 107.38 MiB/s 00:06:32.085 Latency(us) 00:06:32.085 [2024-11-17T14:43:17.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:32.085 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x0 length 0xbd0b 00:06:32.085 Nvme0n1 : 5.80 113.32 7.08 0.00 0.00 1060411.08 23996.26 1206669.00 00:06:32.085 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:32.085 Nvme0n1 : 5.64 130.44 8.15 0.00 0.00 945000.97 16535.24 1206669.00 00:06:32.085 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x0 length 0xa000 00:06:32.085 Nvme1n1 : 5.87 119.92 7.50 0.00 0.00 992427.86 67754.14 1000180.18 00:06:32.085 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0xa000 length 0xa000 00:06:32.085 Nvme1n1 : 5.77 133.03 8.31 0.00 0.00 895697.00 112116.97 1006632.96 00:06:32.085 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x0 length 0x8000 00:06:32.085 Nvme2n1 : 5.97 124.69 7.79 0.00 0.00 926495.55 41741.39 1006632.96 00:06:32.085 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x8000 length 0x8000 00:06:32.085 Nvme2n1 : 5.82 136.45 8.53 0.00 0.00 843055.56 43757.88 942105.21 00:06:32.085 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x0 length 0x8000 00:06:32.085 Nvme2n2 : 5.97 124.95 7.81 0.00 0.00 890755.47 41741.39 1038896.84 00:06:32.085 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x8000 length 0x8000 00:06:32.085 Nvme2n2 : 5.87 141.63 8.85 0.00 0.00 784943.38 53235.40 961463.53 00:06:32.085 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x0 length 0x8000 00:06:32.085 Nvme2n3 : 6.02 127.76 7.98 0.00 0.00 838120.81 56461.78 1064707.94 00:06:32.085 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x8000 length 0x8000 00:06:32.085 Nvme2n3 : 5.98 154.40 9.65 0.00 0.00 696770.98 13510.50 1000180.18 00:06:32.085 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x0 length 0x2000 00:06:32.085 Nvme3n1 : 6.07 147.63 9.23 0.00 0.00 708806.85 970.44 1096971.82 00:06:32.085 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.085 Verification LBA range: start 0x2000 length 0x2000 00:06:32.085 Nvme3n1 : 6.07 172.82 10.80 0.00 0.00 601315.86 708.92 1858399.31 00:06:32.085 [2024-11-17T14:43:17.628Z] =================================================================================================================== 00:06:32.085 [2024-11-17T14:43:17.628Z] Total : 1627.05 101.69 0.00 0.00 832649.25 708.92 1858399.31 00:06:33.457 00:06:33.457 real 0m8.615s 00:06:33.457 user 0m16.329s 00:06:33.457 sys 0m0.215s 00:06:33.457 14:43:18 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 
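The MiB/s column in the two bdevperf summaries above is simply IOPS multiplied by the I/O size, so the Total rows can be cross-checked directly. This is an illustration only, using the figures printed above; it is not part of the test suite.

# Sanity check of the Total rows above: IOPS x IO size / 1 MiB = MiB/s.
awk 'BEGIN {
  printf "verify,  4 KiB I/O: %.2f MiB/s\n", 22261.73 * 4096  / 1048576   # table above reports 86.96
  printf "big-io, 64 KiB I/O: %.2f MiB/s\n", 1627.05  * 65536 / 1048576   # table above reports 101.69
}'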
00:06:33.457 ************************************ 00:06:33.457 END TEST bdev_verify_big_io 00:06:33.457 ************************************ 00:06:33.457 14:43:18 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:33.715 14:43:19 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.715 14:43:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:33.715 14:43:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.715 14:43:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.715 ************************************ 00:06:33.715 START TEST bdev_write_zeroes 00:06:33.715 ************************************ 00:06:33.715 14:43:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.715 [2024-11-17 14:43:19.078495] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:33.715 [2024-11-17 14:43:19.078611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:06:33.715 [2024-11-17 14:43:19.238101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.972 [2024-11-17 14:43:19.335700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.537 Running I/O for 1 seconds... 
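For reference, the three bdevperf data passes traced in this section differ only in workload, I/O size, run time, and core mask. Restated side by side below (arguments copied from the traced invocations; the trailing empty-string argument that run_test passes along is omitted, and the third pass's output follows further down):

# The three bdevperf passes in this section, condensed for readability.
BP=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$BP" --json "$CONF" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify
"$BP" --json "$CONF" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
"$BP" --json "$CONF" -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes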
00:06:35.475 61056.00 IOPS, 238.50 MiB/s 00:06:35.475 Latency(us) 00:06:35.475 [2024-11-17T14:43:21.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.475 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.475 Nvme0n1 : 1.02 10176.69 39.75 0.00 0.00 12545.55 4814.38 23693.78 00:06:35.475 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.475 Nvme1n1 : 1.02 10164.17 39.70 0.00 0.00 12544.97 8922.98 20669.05 00:06:35.475 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.475 Nvme2n1 : 1.02 10152.52 39.66 0.00 0.00 12507.45 8822.15 20064.10 00:06:35.475 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.475 Nvme2n2 : 1.02 10141.02 39.61 0.00 0.00 12474.37 8469.27 18955.03 00:06:35.475 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.475 Nvme2n3 : 1.02 10184.89 39.78 0.00 0.00 12415.34 6604.01 19358.33 00:06:35.475 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.475 Nvme3n1 : 1.03 10173.38 39.74 0.00 0.00 12402.35 5091.64 20870.70 00:06:35.475 [2024-11-17T14:43:21.018Z] =================================================================================================================== 00:06:35.475 [2024-11-17T14:43:21.018Z] Total : 60992.68 238.25 0.00 0.00 12481.52 4814.38 23693.78 00:06:36.413 00:06:36.413 real 0m2.650s 00:06:36.413 user 0m2.358s 00:06:36.413 sys 0m0.176s 00:06:36.414 14:43:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.414 14:43:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:36.414 ************************************ 00:06:36.414 END TEST bdev_write_zeroes 00:06:36.414 ************************************ 00:06:36.414 14:43:21 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.414 14:43:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:36.414 14:43:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.414 14:43:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:36.414 ************************************ 00:06:36.414 START TEST bdev_json_nonenclosed 00:06:36.414 ************************************ 00:06:36.414 14:43:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.414 [2024-11-17 14:43:21.796654] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:36.414 [2024-11-17 14:43:21.796765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60502 ] 00:06:36.675 [2024-11-17 14:43:21.957847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.675 [2024-11-17 14:43:22.052317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.675 [2024-11-17 14:43:22.052393] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:36.675 [2024-11-17 14:43:22.052416] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:36.675 [2024-11-17 14:43:22.052429] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.933 00:06:36.933 real 0m0.490s 00:06:36.933 user 0m0.290s 00:06:36.933 sys 0m0.097s 00:06:36.933 14:43:22 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.933 ************************************ 00:06:36.933 END TEST bdev_json_nonenclosed 00:06:36.933 ************************************ 00:06:36.933 14:43:22 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:36.933 14:43:22 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.933 14:43:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:36.933 14:43:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.933 14:43:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:36.933 ************************************ 00:06:36.933 START TEST bdev_json_nonarray 00:06:36.933 ************************************ 00:06:36.933 14:43:22 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.933 [2024-11-17 14:43:22.339295] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:36.933 [2024-11-17 14:43:22.339396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60522 ] 00:06:37.191 [2024-11-17 14:43:22.498332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.191 [2024-11-17 14:43:22.592296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.191 [2024-11-17 14:43:22.592397] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
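bdev_json_nonenclosed and bdev_json_nonarray (both traced above, with the nonarray teardown continuing below) are negative tests: each feeds bdevperf a deliberately malformed JSON config and expects json_config_prepare_ctx to reject it and spdk_app_stop to report a non-zero status, exactly as the two *ERROR* lines show. The repository's nonenclosed.json and nonarray.json are not reproduced in this log; the snippets below are only a guess at minimal inputs that would trigger the same two messages (a top-level value that is not a {} object, and a "subsystems" key that is not an array).

# Hypothetical minimal inputs, not the actual repo files:
cat > nonenclosed.json <<'EOF'
[ { "subsystem": "bdev", "config": [] } ]
EOF
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev" } }
EOF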
00:06:37.191 [2024-11-17 14:43:22.592421] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:37.191 [2024-11-17 14:43:22.592434] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.450 00:06:37.450 real 0m0.487s 00:06:37.450 user 0m0.292s 00:06:37.450 sys 0m0.091s 00:06:37.450 14:43:22 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.450 14:43:22 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:37.450 ************************************ 00:06:37.450 END TEST bdev_json_nonarray 00:06:37.450 ************************************ 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:37.450 14:43:22 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:37.450 00:06:37.450 real 0m36.575s 00:06:37.450 user 0m56.185s 00:06:37.450 sys 0m5.229s 00:06:37.450 14:43:22 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.450 14:43:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.450 ************************************ 00:06:37.450 END TEST blockdev_nvme 00:06:37.450 ************************************ 00:06:37.450 14:43:22 -- spdk/autotest.sh@209 -- # uname -s 00:06:37.450 14:43:22 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:37.450 14:43:22 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:37.450 14:43:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.450 14:43:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.450 14:43:22 -- common/autotest_common.sh@10 -- # set +x 00:06:37.450 ************************************ 00:06:37.450 START TEST blockdev_nvme_gpt 00:06:37.450 ************************************ 00:06:37.450 14:43:22 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:37.450 * Looking for test storage... 
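Note: every suite in this log is launched through the run_test helper from autotest_common.sh, which is what produces the "START TEST ..." / "END TEST ..." banners and the real/user/sys timing lines seen above. A simplified sketch of that pattern, under the assumption that only the banner-and-timing behaviour matters here (the real helper also manages xtrace and failure propagation, omitted in this sketch):

# Hedged sketch of the run_test pattern: banner, time the command, banner.
run_test_sketch() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"
    local rc=$?
    echo "END TEST $name"
    return $rc
}
# e.g. run_test_sketch blockdev_nvme_gpt ./test/bdev/blockdev.sh gpt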
00:06:37.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:37.450 14:43:22 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.450 14:43:22 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.450 14:43:22 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.709 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.709 14:43:23 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:37.709 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.709 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.709 --rc genhtml_branch_coverage=1 00:06:37.709 --rc genhtml_function_coverage=1 00:06:37.709 --rc genhtml_legend=1 00:06:37.709 --rc geninfo_all_blocks=1 00:06:37.709 --rc geninfo_unexecuted_blocks=1 00:06:37.709 00:06:37.709 ' 00:06:37.709 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.709 --rc 
genhtml_branch_coverage=1 00:06:37.709 --rc genhtml_function_coverage=1 00:06:37.709 --rc genhtml_legend=1 00:06:37.709 --rc geninfo_all_blocks=1 00:06:37.709 --rc geninfo_unexecuted_blocks=1 00:06:37.709 00:06:37.709 ' 00:06:37.709 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.710 --rc genhtml_branch_coverage=1 00:06:37.710 --rc genhtml_function_coverage=1 00:06:37.710 --rc genhtml_legend=1 00:06:37.710 --rc geninfo_all_blocks=1 00:06:37.710 --rc geninfo_unexecuted_blocks=1 00:06:37.710 00:06:37.710 ' 00:06:37.710 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.710 --rc genhtml_branch_coverage=1 00:06:37.710 --rc genhtml_function_coverage=1 00:06:37.710 --rc genhtml_legend=1 00:06:37.710 --rc geninfo_all_blocks=1 00:06:37.710 --rc geninfo_unexecuted_blocks=1 00:06:37.710 00:06:37.710 ' 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60606 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60606 
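Note: the trace above is scripts/common.sh deciding whether the installed lcov is at least version 2: cmp_versions splits each version string on '.', '-' and ':' and compares the components numerically, left to right. A condensed sketch of that comparison, assuming both arguments are plain dotted versions (the real helper also normalizes components through its decimal function, which this sketch skips):

# Hedged sketch of the cmp_versions idea: split on .-: and compare numerically,
# field by field. Returns 0 when $1 is strictly lower than $2.
version_lt_sketch() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal, so not less-than
}
# e.g. version_lt_sketch 1.15 2 && echo "lcov older than 2, adjust LCOV_OPTS"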
00:06:37.710 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60606 ']' 00:06:37.710 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.710 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.710 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.710 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.710 14:43:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:37.710 14:43:23 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:37.710 [2024-11-17 14:43:23.107334] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:37.710 [2024-11-17 14:43:23.107480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60606 ] 00:06:37.969 [2024-11-17 14:43:23.265403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.969 [2024-11-17 14:43:23.362887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.538 14:43:24 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.538 14:43:24 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:38.538 14:43:24 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:38.538 14:43:24 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:38.538 14:43:24 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:38.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:39.053 Waiting for block devices as requested 00:06:39.053 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.053 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.312 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.312 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.575 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:44.575 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:44.575 14:43:29 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:44.575 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:44.576 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:44.576 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:44.576 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.576 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:44.576 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:44.576 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:44.576 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:44.576 14:43:29 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:44.576 14:43:29 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:44.576 BYT; 00:06:44.576 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:44.576 BYT; 00:06:44.576 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:44.576 14:43:29 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:44.576 14:43:29 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:45.509 The operation has completed successfully. 00:06:45.509 14:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:46.443 The operation has completed successfully. 00:06:46.444 14:43:31 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:47.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:47.268 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.268 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.268 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.526 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.526 14:43:32 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:47.526 14:43:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.526 14:43:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.527 [] 00:06:47.527 14:43:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.527 14:43:32 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:47.527 14:43:32 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:47.527 14:43:32 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:47.527 14:43:32 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:47.527 14:43:32 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:47.527 14:43:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.527 14:43:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:47.795 14:43:33 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:47.795 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:47.795 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:47.796 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b06eb21b-842f-49c4-ad4a-6cc8df8d6ea8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b06eb21b-842f-49c4-ad4a-6cc8df8d6ea8",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c835a8ba-c63a-4383-bcde-d8c6156f794b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c835a8ba-c63a-4383-bcde-d8c6156f794b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5536f1ee-971f-45aa-9946-54cc6f5bb514"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5536f1ee-971f-45aa-9946-54cc6f5bb514",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2f53c1f0-855d-48ba-9829-b9f7c0207349"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2f53c1f0-855d-48ba-9829-b9f7c0207349",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f8d9a5f7-b976-4732-936a-27bc0e3da451"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f8d9a5f7-b976-4732-936a-27bc0e3da451",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:48.053 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:48.054 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:48.054 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:48.054 14:43:33 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60606 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60606 ']' 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60606 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60606 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.054 killing process with pid 60606 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60606' 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60606 00:06:48.054 14:43:33 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60606 00:06:48.987 14:43:34 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:48.987 14:43:34 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:48.987 14:43:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:48.987 14:43:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.987 14:43:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.987 ************************************ 00:06:48.987 START TEST bdev_hello_world 00:06:48.987 ************************************ 00:06:48.987 14:43:34 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:49.246 
[2024-11-17 14:43:34.577401] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:49.246 [2024-11-17 14:43:34.577511] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61226 ] 00:06:49.246 [2024-11-17 14:43:34.732627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.503 [2024-11-17 14:43:34.810312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.761 [2024-11-17 14:43:35.297406] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:49.761 [2024-11-17 14:43:35.297445] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:49.761 [2024-11-17 14:43:35.297462] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:49.761 [2024-11-17 14:43:35.299372] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:49.761 [2024-11-17 14:43:35.299870] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:49.761 [2024-11-17 14:43:35.299893] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:49.761 [2024-11-17 14:43:35.300093] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:49.761 00:06:49.761 [2024-11-17 14:43:35.300115] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:50.325 00:06:50.325 real 0m1.335s 00:06:50.325 user 0m1.088s 00:06:50.325 sys 0m0.143s 00:06:50.325 ************************************ 00:06:50.325 END TEST bdev_hello_world 00:06:50.325 ************************************ 00:06:50.325 14:43:35 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.325 14:43:35 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:50.583 14:43:35 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:50.583 14:43:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.583 14:43:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.583 14:43:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.583 ************************************ 00:06:50.583 START TEST bdev_bounds 00:06:50.583 ************************************ 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61263 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.583 Process bdevio pid: 61263 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61263' 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61263 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61263 ']' 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:50.583 14:43:35 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.583 14:43:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:50.583 [2024-11-17 14:43:35.958008] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:50.584 [2024-11-17 14:43:35.958123] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61263 ] 00:06:50.584 [2024-11-17 14:43:36.118434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.841 [2024-11-17 14:43:36.215349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.841 [2024-11-17 14:43:36.216004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.841 [2024-11-17 14:43:36.216019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.407 14:43:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.407 14:43:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:51.407 14:43:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:51.407 I/O targets: 00:06:51.407 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:51.407 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:51.407 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:51.407 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:51.407 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:51.407 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:51.407 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:51.407 00:06:51.407 00:06:51.407 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.407 http://cunit.sourceforge.net/ 00:06:51.407 00:06:51.407 00:06:51.407 Suite: bdevio tests on: Nvme3n1 00:06:51.407 Test: blockdev write read block ...passed 00:06:51.407 Test: blockdev write zeroes read block ...passed 00:06:51.407 Test: blockdev write zeroes read no split ...passed 00:06:51.407 Test: blockdev write zeroes read split ...passed 00:06:51.407 Test: blockdev write zeroes read split partial ...passed 00:06:51.407 Test: blockdev reset ...[2024-11-17 14:43:36.924544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:51.407 [2024-11-17 14:43:36.927252] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:51.407 passed 00:06:51.407 Test: blockdev write read 8 blocks ...passed 00:06:51.407 Test: blockdev write read size > 128k ...passed 00:06:51.407 Test: blockdev write read invalid size ...passed 00:06:51.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.407 Test: blockdev write read max offset ...passed 00:06:51.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.407 Test: blockdev writev readv 8 blocks ...passed 00:06:51.407 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.407 Test: blockdev writev readv block ...passed 00:06:51.407 Test: blockdev writev readv size > 128k ...passed 00:06:51.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.407 Test: blockdev comparev and writev ...[2024-11-17 14:43:36.934811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb404000 len:0x1000 00:06:51.407 [2024-11-17 14:43:36.934858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.407 passed 00:06:51.407 Test: blockdev nvme passthru rw ...passed 00:06:51.407 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:43:36.935469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:51.407 [2024-11-17 14:43:36.935496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:06:51.407 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:06:51.407 passed 00:06:51.407 Test: blockdev copy ...passed 00:06:51.407 Suite: bdevio tests on: Nvme2n3 00:06:51.407 Test: blockdev write read block ...passed 00:06:51.407 Test: blockdev write zeroes read block ...passed 00:06:51.666 Test: blockdev write zeroes read no split ...passed 00:06:51.666 Test: blockdev write zeroes read split ...passed 00:06:51.666 Test: blockdev write zeroes read split partial ...passed 00:06:51.666 Test: blockdev reset ...[2024-11-17 14:43:36.991968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:51.666 [2024-11-17 14:43:36.995023] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:51.666 passed 00:06:51.666 Test: blockdev write read 8 blocks ...passed 00:06:51.666 Test: blockdev write read size > 128k ...passed 00:06:51.666 Test: blockdev write read invalid size ...passed 00:06:51.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.666 Test: blockdev write read max offset ...passed 00:06:51.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.666 Test: blockdev writev readv 8 blocks ...passed 00:06:51.666 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.666 Test: blockdev writev readv block ...passed 00:06:51.666 Test: blockdev writev readv size > 128k ...passed 00:06:51.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.666 Test: blockdev comparev and writev ...[2024-11-17 14:43:37.001132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb402000 len:0x1000 00:06:51.666 passed 00:06:51.666 Test: blockdev nvme passthru rw ...[2024-11-17 14:43:37.001174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.666 passed 00:06:51.666 Test: blockdev nvme passthru vendor specific ...passed 00:06:51.666 Test: blockdev nvme admin passthru ...[2024-11-17 14:43:37.001631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:51.666 [2024-11-17 14:43:37.001655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:51.666 passed 00:06:51.666 Test: blockdev copy ...passed 00:06:51.666 Suite: bdevio tests on: Nvme2n2 00:06:51.666 Test: blockdev write read block ...passed 00:06:51.666 Test: blockdev write zeroes read block ...passed 00:06:51.666 Test: blockdev write zeroes read no split ...passed 00:06:51.666 Test: blockdev write zeroes read split ...passed 00:06:51.666 Test: blockdev write zeroes read split partial ...passed 00:06:51.666 Test: blockdev reset ...[2024-11-17 14:43:37.044886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:51.666 [2024-11-17 14:43:37.047722] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:51.666 passed 00:06:51.666 Test: blockdev write read 8 blocks ...passed 00:06:51.666 Test: blockdev write read size > 128k ...passed 00:06:51.666 Test: blockdev write read invalid size ...passed 00:06:51.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.666 Test: blockdev write read max offset ...passed 00:06:51.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.666 Test: blockdev writev readv 8 blocks ...passed 00:06:51.666 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.666 Test: blockdev writev readv block ...passed 00:06:51.666 Test: blockdev writev readv size > 128k ...passed 00:06:51.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.666 Test: blockdev comparev and writev ...[2024-11-17 14:43:37.054284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2db238000 len:0x1000 00:06:51.666 [2024-11-17 14:43:37.054322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.666 passed 00:06:51.666 Test: blockdev nvme passthru rw ...passed 00:06:51.666 Test: blockdev nvme passthru vendor specific ...passed 00:06:51.666 Test: blockdev nvme admin passthru ...[2024-11-17 14:43:37.054760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:51.666 [2024-11-17 14:43:37.054783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:51.666 passed 00:06:51.666 Test: blockdev copy ...passed 00:06:51.666 Suite: bdevio tests on: Nvme2n1 00:06:51.666 Test: blockdev write read block ...passed 00:06:51.666 Test: blockdev write zeroes read block ...passed 00:06:51.666 Test: blockdev write zeroes read no split ...passed 00:06:51.666 Test: blockdev write zeroes read split ...passed 00:06:51.666 Test: blockdev write zeroes read split partial ...passed 00:06:51.666 Test: blockdev reset ...[2024-11-17 14:43:37.096288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:51.666 [2024-11-17 14:43:37.099073] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:51.666 passed 00:06:51.666 Test: blockdev write read 8 blocks ...passed 00:06:51.666 Test: blockdev write read size > 128k ...passed 00:06:51.666 Test: blockdev write read invalid size ...passed 00:06:51.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.666 Test: blockdev write read max offset ...passed 00:06:51.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.666 Test: blockdev writev readv 8 blocks ...passed 00:06:51.666 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.666 Test: blockdev writev readv block ...passed 00:06:51.666 Test: blockdev writev readv size > 128k ...passed 00:06:51.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.666 Test: blockdev comparev and writev ...[2024-11-17 14:43:37.105971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2db234000 len:0x1000 00:06:51.666 [2024-11-17 14:43:37.106014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.666 passed 00:06:51.666 Test: blockdev nvme passthru rw ...passed 00:06:51.666 Test: blockdev nvme passthru vendor specific ...passed 00:06:51.666 Test: blockdev nvme admin passthru ...[2024-11-17 14:43:37.106583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:51.666 [2024-11-17 14:43:37.106608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:51.666 passed 00:06:51.666 Test: blockdev copy ...passed 00:06:51.666 Suite: bdevio tests on: Nvme1n1p2 00:06:51.666 Test: blockdev write read block ...passed 00:06:51.666 Test: blockdev write zeroes read block ...passed 00:06:51.666 Test: blockdev write zeroes read no split ...passed 00:06:51.666 Test: blockdev write zeroes read split ...passed 00:06:51.666 Test: blockdev write zeroes read split partial ...passed 00:06:51.666 Test: blockdev reset ...[2024-11-17 14:43:37.163263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:51.666 [2024-11-17 14:43:37.165850] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:51.666 passed 00:06:51.666 Test: blockdev write read 8 blocks ...passed 00:06:51.666 Test: blockdev write read size > 128k ...passed 00:06:51.666 Test: blockdev write read invalid size ...passed 00:06:51.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.666 Test: blockdev write read max offset ...passed 00:06:51.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.666 Test: blockdev writev readv 8 blocks ...passed 00:06:51.666 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.666 Test: blockdev writev readv block ...passed 00:06:51.666 Test: blockdev writev readv size > 128k ...passed 00:06:51.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.666 Test: blockdev comparev and writev ...[2024-11-17 14:43:37.174823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2db230000 len:0x1000 00:06:51.666 [2024-11-17 14:43:37.174949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.666 passed 00:06:51.666 Test: blockdev nvme passthru rw ...passed 00:06:51.666 Test: blockdev nvme passthru vendor specific ...passed 00:06:51.666 Test: blockdev nvme admin passthru ...passed 00:06:51.666 Test: blockdev copy ...passed 00:06:51.666 Suite: bdevio tests on: Nvme1n1p1 00:06:51.666 Test: blockdev write read block ...passed 00:06:51.666 Test: blockdev write zeroes read block ...passed 00:06:51.666 Test: blockdev write zeroes read no split ...passed 00:06:51.666 Test: blockdev write zeroes read split ...passed 00:06:51.925 Test: blockdev write zeroes read split partial ...passed 00:06:51.925 Test: blockdev reset ...[2024-11-17 14:43:37.220278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:51.925 [2024-11-17 14:43:37.222811] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:51.925 passed 00:06:51.925 Test: blockdev write read 8 blocks ...passed 00:06:51.925 Test: blockdev write read size > 128k ...passed 00:06:51.925 Test: blockdev write read invalid size ...passed 00:06:51.925 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.925 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.925 Test: blockdev write read max offset ...passed 00:06:51.925 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.925 Test: blockdev writev readv 8 blocks ...passed 00:06:51.925 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.925 Test: blockdev writev readv block ...passed 00:06:51.925 Test: blockdev writev readv size > 128k ...passed 00:06:51.925 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.925 Test: blockdev comparev and writev ...[2024-11-17 14:43:37.229849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bbe0e000 len:0x1000 00:06:51.925 [2024-11-17 14:43:37.229888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.925 passed 00:06:51.925 Test: blockdev nvme passthru rw ...passed 00:06:51.925 Test: blockdev nvme passthru vendor specific ...passed 00:06:51.925 Test: blockdev nvme admin passthru ...passed 00:06:51.925 Test: blockdev copy ...passed 00:06:51.925 Suite: bdevio tests on: Nvme0n1 00:06:51.925 Test: blockdev write read block ...passed 00:06:51.925 Test: blockdev write zeroes read block ...passed 00:06:51.925 Test: blockdev write zeroes read no split ...passed 00:06:51.925 Test: blockdev write zeroes read split ...passed 00:06:51.925 Test: blockdev write zeroes read split partial ...passed 00:06:51.925 Test: blockdev reset ...[2024-11-17 14:43:37.272209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:51.925 [2024-11-17 14:43:37.274735] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:51.925 passed 00:06:51.925 Test: blockdev write read 8 blocks ...passed 00:06:51.925 Test: blockdev write read size > 128k ...passed 00:06:51.925 Test: blockdev write read invalid size ...passed 00:06:51.925 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.925 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.925 Test: blockdev write read max offset ...passed 00:06:51.925 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.925 Test: blockdev writev readv 8 blocks ...passed 00:06:51.925 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.925 Test: blockdev writev readv block ...passed 00:06:51.925 Test: blockdev writev readv size > 128k ...passed 00:06:51.925 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.925 Test: blockdev comparev and writev ...[2024-11-17 14:43:37.280160] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:51.925 separate metadata which is not supported yet. 
00:06:51.925 passed 00:06:51.925 Test: blockdev nvme passthru rw ...passed 00:06:51.925 Test: blockdev nvme passthru vendor specific ...[2024-11-17 14:43:37.280631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:51.925 [2024-11-17 14:43:37.280663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:51.925 passed 00:06:51.925 Test: blockdev nvme admin passthru ...passed 00:06:51.925 Test: blockdev copy ...passed 00:06:51.925 00:06:51.925 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.925 suites 7 7 n/a 0 0 00:06:51.925 tests 161 161 161 0 0 00:06:51.925 asserts 1025 1025 1025 0 n/a 00:06:51.925 00:06:51.925 Elapsed time = 1.075 seconds 00:06:51.925 0 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61263 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61263 ']' 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61263 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61263 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.925 killing process with pid 61263 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61263' 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61263 00:06:51.925 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61263 00:06:52.492 14:43:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:52.492 00:06:52.492 real 0m2.078s 00:06:52.492 user 0m5.335s 00:06:52.492 sys 0m0.252s 00:06:52.492 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.492 14:43:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:52.492 ************************************ 00:06:52.492 END TEST bdev_bounds 00:06:52.492 ************************************ 00:06:52.492 14:43:38 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:52.492 14:43:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:52.492 14:43:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.492 14:43:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.750 ************************************ 00:06:52.750 START TEST bdev_nbd 00:06:52.750 ************************************ 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:52.750 14:43:38 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61318 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61318 /var/tmp/spdk-nbd.sock 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61318 ']' 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:52.750 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:52.750 [2024-11-17 14:43:38.105153] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:52.751 [2024-11-17 14:43:38.105268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.751 [2024-11-17 14:43:38.266344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.008 [2024-11-17 14:43:38.361648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:53.576 14:43:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.838 1+0 records in 00:06:53.838 1+0 records out 00:06:53.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611881 s, 6.7 MB/s 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:53.838 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.096 1+0 records in 00:06:54.096 1+0 records out 00:06:54.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457315 s, 9.0 MB/s 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.096 1+0 records in 00:06:54.096 1+0 records out 00:06:54.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814595 s, 5.0 MB/s 00:06:54.096 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.355 1+0 records in 00:06:54.355 1+0 records out 00:06:54.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125905 s, 3.3 MB/s 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.355 14:43:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.613 1+0 records in 00:06:54.613 1+0 records out 00:06:54.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627444 s, 6.5 MB/s 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.613 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
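The trace above repeats one start-and-verify pattern per bdev: nbd_start_disk over the /var/tmp/spdk-nbd.sock RPC socket returns a /dev/nbdX node, the helper then polls /proc/partitions until that name appears, and a single 4 KiB direct read via dd proves the device answers I/O. A minimal standalone sketch of that pattern, assuming the socket path from this trace while the bdev name, retry budget and sleep interval are illustrative rather than taken from this run:

    sock=/var/tmp/spdk-nbd.sock
    nbd=$(scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1)     # prints the allocated node, e.g. /dev/nbd0
    name=$(basename "$nbd")
    for i in $(seq 1 20); do                                    # wait until the kernel exposes the device
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1
    done
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one direct read confirms the node is usable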
00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.872 1+0 records in 00:06:54.872 1+0 records out 00:06:54.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519437 s, 7.9 MB/s 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.872 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.131 1+0 records in 00:06:55.131 1+0 records out 00:06:55.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083688 s, 4.9 MB/s 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.131 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd0", 00:06:55.390 "bdev_name": "Nvme0n1" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd1", 00:06:55.390 "bdev_name": "Nvme1n1p1" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd2", 00:06:55.390 "bdev_name": "Nvme1n1p2" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd3", 00:06:55.390 "bdev_name": "Nvme2n1" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd4", 00:06:55.390 "bdev_name": "Nvme2n2" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd5", 00:06:55.390 "bdev_name": "Nvme2n3" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd6", 00:06:55.390 "bdev_name": "Nvme3n1" 00:06:55.390 } 00:06:55.390 ]' 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd0", 00:06:55.390 "bdev_name": "Nvme0n1" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd1", 00:06:55.390 "bdev_name": "Nvme1n1p1" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd2", 00:06:55.390 "bdev_name": "Nvme1n1p2" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd3", 00:06:55.390 "bdev_name": "Nvme2n1" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd4", 00:06:55.390 "bdev_name": "Nvme2n2" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd5", 00:06:55.390 "bdev_name": "Nvme2n3" 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "nbd_device": "/dev/nbd6", 00:06:55.390 "bdev_name": "Nvme3n1" 00:06:55.390 } 00:06:55.390 ]' 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.390 14:43:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.649 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.907 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.907 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.907 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.908 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.908 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.908 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.908 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.908 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.908 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.908 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.166 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.423 14:43:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.680 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
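The teardown in this part of the trace mirrors the setup: each node is detached with nbd_stop_disk and the helper waits for its name to drop out of /proc/partitions before moving on to the next device. A rough equivalent of that wait, with an illustrative retry budget and delay rather than the values used by the actual helper:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
    for i in $(seq 1 20); do
        grep -q -w nbd6 /proc/partitions || break               # gone from the partition table: detach finished
        sleep 0.1
    done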
00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.938 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:57.196 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:57.197 14:43:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.197 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:57.197 /dev/nbd0 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.454 1+0 records in 00:06:57.454 1+0 records out 00:06:57.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370932 s, 11.0 MB/s 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.454 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:57.455 /dev/nbd1 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.455 14:43:42 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.455 1+0 records in 00:06:57.455 1+0 records out 00:06:57.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456148 s, 9.0 MB/s 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.455 14:43:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:57.712 /dev/nbd10 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.712 1+0 records in 00:06:57.712 1+0 records out 00:06:57.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374687 s, 10.9 MB/s 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.712 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:57.970 /dev/nbd11 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.970 1+0 records in 00:06:57.970 1+0 records out 00:06:57.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294213 s, 13.9 MB/s 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.970 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:58.227 /dev/nbd12 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.227 1+0 records in 00:06:58.227 1+0 records out 00:06:58.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344695 s, 11.9 MB/s 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.227 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:58.484 /dev/nbd13 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.484 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.485 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.485 1+0 records in 00:06:58.485 1+0 records out 00:06:58.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482536 s, 8.5 MB/s 00:06:58.485 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.485 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.485 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.485 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.485 14:43:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.485 14:43:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.485 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.485 14:43:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:58.742 /dev/nbd14 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.742 1+0 records in 00:06:58.742 1+0 records out 00:06:58.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478801 s, 8.6 MB/s 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.742 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.743 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.743 14:43:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.743 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.743 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.743 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.743 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.743 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd0", 00:06:59.001 "bdev_name": "Nvme0n1" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd1", 00:06:59.001 "bdev_name": "Nvme1n1p1" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd10", 00:06:59.001 "bdev_name": "Nvme1n1p2" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd11", 00:06:59.001 "bdev_name": "Nvme2n1" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd12", 00:06:59.001 "bdev_name": "Nvme2n2" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd13", 
00:06:59.001 "bdev_name": "Nvme2n3" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd14", 00:06:59.001 "bdev_name": "Nvme3n1" 00:06:59.001 } 00:06:59.001 ]' 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd0", 00:06:59.001 "bdev_name": "Nvme0n1" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd1", 00:06:59.001 "bdev_name": "Nvme1n1p1" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd10", 00:06:59.001 "bdev_name": "Nvme1n1p2" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd11", 00:06:59.001 "bdev_name": "Nvme2n1" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd12", 00:06:59.001 "bdev_name": "Nvme2n2" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd13", 00:06:59.001 "bdev_name": "Nvme2n3" 00:06:59.001 }, 00:06:59.001 { 00:06:59.001 "nbd_device": "/dev/nbd14", 00:06:59.001 "bdev_name": "Nvme3n1" 00:06:59.001 } 00:06:59.001 ]' 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.001 /dev/nbd1 00:06:59.001 /dev/nbd10 00:06:59.001 /dev/nbd11 00:06:59.001 /dev/nbd12 00:06:59.001 /dev/nbd13 00:06:59.001 /dev/nbd14' 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.001 /dev/nbd1 00:06:59.001 /dev/nbd10 00:06:59.001 /dev/nbd11 00:06:59.001 /dev/nbd12 00:06:59.001 /dev/nbd13 00:06:59.001 /dev/nbd14' 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:59.001 256+0 records in 00:06:59.001 256+0 records out 00:06:59.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00933787 s, 112 MB/s 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.001 256+0 records in 00:06:59.001 256+0 records out 00:06:59.001 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0730336 s, 14.4 MB/s 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.001 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.259 256+0 records in 00:06:59.259 256+0 records out 00:06:59.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0738481 s, 14.2 MB/s 00:06:59.259 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.259 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:59.259 256+0 records in 00:06:59.259 256+0 records out 00:06:59.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0733085 s, 14.3 MB/s 00:06:59.259 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.259 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:59.259 256+0 records in 00:06:59.259 256+0 records out 00:06:59.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0781327 s, 13.4 MB/s 00:06:59.259 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.259 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:59.259 256+0 records in 00:06:59.259 256+0 records out 00:06:59.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0701106 s, 15.0 MB/s 00:06:59.259 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.259 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:59.516 256+0 records in 00:06:59.516 256+0 records out 00:06:59.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0786618 s, 13.3 MB/s 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:59.516 256+0 records in 00:06:59.516 256+0 records out 00:06:59.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0725826 s, 14.4 MB/s 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:59.516 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.517 14:43:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.776 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.776 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.776 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.776 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.776 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.776 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.776 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.776 14:43:45 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.776 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.776 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.111 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.369 14:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.627 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.885 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.142 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.401 14:43:46 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:01.401 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:01.659 malloc_lvol_verify 00:07:01.659 14:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:01.659 328c9f7c-8add-4042-96f2-c9d2ed79e90a 00:07:01.659 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:01.916 0c6a5b31-24a0-4281-87f4-35eb29f58013 00:07:01.916 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:02.174 /dev/nbd0 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:02.174 Discarding device blocks: 0/4096mke2fs 1.47.0 (5-Feb-2023) 00:07:02.174  done 00:07:02.174 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:02.174 00:07:02.174 Allocating group tables: 0/1 done 00:07:02.174 Writing inode tables: 0/1 done 00:07:02.174 Creating journal (1024 blocks): done 00:07:02.174 Writing superblocks and filesystem accounting information: 0/1 done 00:07:02.174 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:02.174 14:43:47 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.174 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61318 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61318 ']' 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61318 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61318 00:07:02.432 killing process with pid 61318 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61318' 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61318 00:07:02.432 14:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61318 00:07:05.715 14:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:05.715 00:07:05.715 real 0m12.926s 00:07:05.715 user 0m17.066s 00:07:05.715 sys 0m3.838s 00:07:05.715 14:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.715 14:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:05.715 ************************************ 00:07:05.715 END TEST bdev_nbd 00:07:05.715 ************************************ 00:07:05.715 14:43:50 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:05.716 14:43:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:05.716 14:43:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:05.716 skipping fio tests on NVMe due to multi-ns failures. 
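Condensed from the trace above, the per-device check and 1 MiB round-trip verify that bdev_nbd just finished can be reproduced standalone against a running spdk-nbd target. A minimal sketch, with the socket and rpc.py path taken from the log; the helper name and temp-file path here are illustrative, not the repository's own:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/tmp/nbdrandtest                        # stand-in for test/bdev/nbdrandtest

  wait_for_nbd() {                            # mirrors the waitfornbd polling seen above
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      # one direct 4 KiB read confirms the device actually answers I/O
      dd if=/dev/"$name" of=/dev/null bs=4096 count=1 iflag=direct
  }

  "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
  wait_for_nbd nbd0
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct   # push it to the export
  cmp -b -n 1M "$tmp" /dev/nbd0                              # read-back comparison
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0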
00:07:05.716 14:43:50 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:05.716 14:43:50 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:05.716 14:43:50 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:05.716 14:43:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:05.716 14:43:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.716 14:43:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.716 ************************************ 00:07:05.716 START TEST bdev_verify 00:07:05.716 ************************************ 00:07:05.716 14:43:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:05.716 [2024-11-17 14:43:51.063976] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:05.716 [2024-11-17 14:43:51.064095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61736 ] 00:07:05.716 [2024-11-17 14:43:51.223253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.974 [2024-11-17 14:43:51.326895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.974 [2024-11-17 14:43:51.326917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.541 Running I/O for 5 seconds... 
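The run in progress here is driven by bdevperf; stripped of the run_test wrapper, the invocation traced above is a single command (bdev.json describes the NVMe and GPT bdevs under test and is not echoed in the log):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # -q 128 queue depth, -o 4096-byte I/Os, -w verify read-back workload,
  # -t 5 seconds, -m 0x3 cores 0 and 1, -C passed through as in the test script
  "$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3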
00:07:08.846 23488.00 IOPS, 91.75 MiB/s [2024-11-17T14:43:55.323Z] 20894.00 IOPS, 81.62 MiB/s [2024-11-17T14:43:56.256Z] 21760.00 IOPS, 85.00 MiB/s [2024-11-17T14:43:57.190Z] 22463.25 IOPS, 87.75 MiB/s [2024-11-17T14:43:57.190Z] 22655.80 IOPS, 88.50 MiB/s 00:07:11.647 Latency(us) 00:07:11.647 [2024-11-17T14:43:57.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.647 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x0 length 0xbd0bd 00:07:11.647 Nvme0n1 : 5.07 1628.65 6.36 0.00 0.00 78247.33 10334.52 235526.30 00:07:11.647 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:11.647 Nvme0n1 : 5.08 1561.88 6.10 0.00 0.00 81754.58 15526.99 232299.91 00:07:11.647 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x0 length 0x4ff80 00:07:11.647 Nvme1n1p1 : 5.07 1628.11 6.36 0.00 0.00 78150.87 10284.11 235526.30 00:07:11.647 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:11.647 Nvme1n1p1 : 5.08 1560.89 6.10 0.00 0.00 81554.85 17140.18 232299.91 00:07:11.647 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x0 length 0x4ff7f 00:07:11.647 Nvme1n1p2 : 5.07 1627.63 6.36 0.00 0.00 78060.95 9981.64 232299.91 00:07:11.647 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:11.647 Nvme1n1p2 : 5.09 1559.57 6.09 0.00 0.00 81425.93 18249.26 229073.53 00:07:11.647 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x0 length 0x80000 00:07:11.647 Nvme2n1 : 5.08 1636.89 6.39 0.00 0.00 77682.93 8116.38 230686.72 00:07:11.647 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x80000 length 0x80000 00:07:11.647 Nvme2n1 : 5.09 1558.28 6.09 0.00 0.00 81289.29 18047.61 224233.94 00:07:11.647 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x0 length 0x80000 00:07:11.647 Nvme2n2 : 5.08 1636.42 6.39 0.00 0.00 77556.75 8217.21 229073.53 00:07:11.647 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x80000 length 0x80000 00:07:11.647 Nvme2n2 : 5.10 1557.15 6.08 0.00 0.00 81149.33 15930.29 222620.75 00:07:11.647 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x0 length 0x80000 00:07:11.647 Nvme2n3 : 5.09 1635.06 6.39 0.00 0.00 77446.37 11342.77 225847.14 00:07:11.647 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x80000 length 0x80000 00:07:11.647 Nvme2n3 : 5.08 1550.04 6.05 0.00 0.00 81448.46 10737.82 250045.05 00:07:11.647 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x0 length 0x20000 00:07:11.647 Nvme3n1 : 5.09 1633.52 6.38 0.00 0.00 77339.77 11040.30 256497.82 00:07:11.647 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.647 Verification LBA range: start 0x20000 length 0x20000 
00:07:11.647 Nvme3n1 : 5.10 1555.06 6.07 0.00 0.00 81004.98 4612.73 253271.43 00:07:11.647 [2024-11-17T14:43:57.190Z] =================================================================================================================== 00:07:11.647 [2024-11-17T14:43:57.190Z] Total : 22329.15 87.22 0.00 0.00 79538.32 4612.73 256497.82 00:07:13.022 00:07:13.022 real 0m7.252s 00:07:13.022 user 0m13.673s 00:07:13.022 sys 0m0.194s 00:07:13.022 ************************************ 00:07:13.022 END TEST bdev_verify 00:07:13.022 ************************************ 00:07:13.022 14:43:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.022 14:43:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.022 14:43:58 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.022 14:43:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:13.022 14:43:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.022 14:43:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.022 ************************************ 00:07:13.022 START TEST bdev_verify_big_io 00:07:13.022 ************************************ 00:07:13.022 14:43:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.022 [2024-11-17 14:43:58.360793] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:13.022 [2024-11-17 14:43:58.360934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61834 ] 00:07:13.022 [2024-11-17 14:43:58.521440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.280 [2024-11-17 14:43:58.600049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.280 [2024-11-17 14:43:58.600143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.846 Running I/O for 5 seconds... 
00:07:18.974 1957.00 IOPS, 122.31 MiB/s [2024-11-17T14:44:05.452Z] 2592.00 IOPS, 162.00 MiB/s [2024-11-17T14:44:05.452Z] 2897.67 IOPS, 181.10 MiB/s 00:07:19.909 Latency(us) 00:07:19.909 [2024-11-17T14:44:05.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.910 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x0 length 0xbd0b 00:07:19.910 Nvme0n1 : 5.71 112.14 7.01 0.00 0.00 1104619.28 22988.01 1129235.69 00:07:19.910 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:19.910 Nvme0n1 : 6.01 125.73 7.86 0.00 0.00 843975.52 13308.85 2129415.88 00:07:19.910 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x0 length 0x4ff8 00:07:19.910 Nvme1n1p1 : 5.79 111.07 6.94 0.00 0.00 1071656.21 112116.97 1038896.84 00:07:19.910 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:19.910 Nvme1n1p1 : 6.05 157.33 9.83 0.00 0.00 665653.12 806.60 1871304.86 00:07:19.910 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x0 length 0x4ff7 00:07:19.910 Nvme1n1p2 : 5.79 112.90 7.06 0.00 0.00 1042724.78 80256.39 1180857.90 00:07:19.910 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:19.910 Nvme1n1p2 : 5.89 103.72 6.48 0.00 0.00 1163998.40 11443.59 1348630.06 00:07:19.910 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x0 length 0x8000 00:07:19.910 Nvme2n1 : 5.89 118.18 7.39 0.00 0.00 971262.64 61301.37 1200216.22 00:07:19.910 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x8000 length 0x8000 00:07:19.910 Nvme2n1 : 5.75 101.60 6.35 0.00 0.00 1156726.63 85902.57 1271196.75 00:07:19.910 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x0 length 0x8000 00:07:19.910 Nvme2n2 : 5.89 119.69 7.48 0.00 0.00 933229.78 61704.66 1193763.45 00:07:19.910 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x8000 length 0x8000 00:07:19.910 Nvme2n2 : 5.89 105.89 6.62 0.00 0.00 1078129.25 120989.54 1303460.63 00:07:19.910 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x0 length 0x8000 00:07:19.910 Nvme2n3 : 5.93 129.53 8.10 0.00 0.00 848593.13 36296.86 1206669.00 00:07:19.910 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x8000 length 0x8000 00:07:19.910 Nvme2n3 : 5.98 110.77 6.92 0.00 0.00 1013755.72 15123.69 2051982.57 00:07:19.910 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x0 length 0x2000 00:07:19.910 Nvme3n1 : 5.94 140.07 8.75 0.00 0.00 766103.93 5646.18 1238932.87 00:07:19.910 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.910 Verification LBA range: start 0x2000 length 0x2000 00:07:19.910 Nvme3n1 : 5.95 110.17 6.89 0.00 0.00 989259.77 15123.69 2090699.22 00:07:19.910 
[2024-11-17T14:44:05.453Z] =================================================================================================================== 00:07:19.910 [2024-11-17T14:44:05.453Z] Total : 1658.78 103.67 0.00 0.00 955975.58 806.60 2129415.88 00:07:21.284 00:07:21.284 real 0m8.347s 00:07:21.284 user 0m15.858s 00:07:21.284 sys 0m0.216s 00:07:21.284 14:44:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.284 14:44:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:21.284 ************************************ 00:07:21.284 END TEST bdev_verify_big_io 00:07:21.284 ************************************ 00:07:21.284 14:44:06 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.284 14:44:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:21.284 14:44:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.284 14:44:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.284 ************************************ 00:07:21.284 START TEST bdev_write_zeroes 00:07:21.284 ************************************ 00:07:21.284 14:44:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.284 [2024-11-17 14:44:06.739656] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:21.284 [2024-11-17 14:44:06.739769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61944 ] 00:07:21.542 [2024-11-17 14:44:06.905399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.542 [2024-11-17 14:44:06.998538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.108 Running I/O for 1 seconds... 
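Whether a bdev accepts a write_zeroes workload is advertised in its supported_io_types map (the GPT partition dumps later in this log carry "write_zeroes": true). A quick way to list it against a running target, assuming the default /var/tmp/spdk.sock RPC socket; the jq one-liner is an illustration, not part of the test scripts:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # print each bdev name next to its write_zeroes capability flag
  "$rpc" bdev_get_bdevs | jq -r '.[] | "\(.name)\t\(.supported_io_types.write_zeroes)"'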
00:07:23.301 67200.00 IOPS, 262.50 MiB/s 00:07:23.301 Latency(us) 00:07:23.301 [2024-11-17T14:44:08.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.301 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.301 Nvme0n1 : 1.02 9559.06 37.34 0.00 0.00 13361.46 10939.47 23996.26 00:07:23.301 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.301 Nvme1n1p1 : 1.03 9547.25 37.29 0.00 0.00 13357.50 10737.82 23592.96 00:07:23.301 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.301 Nvme1n1p2 : 1.03 9535.52 37.25 0.00 0.00 13344.78 10788.23 22786.36 00:07:23.301 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.301 Nvme2n1 : 1.03 9524.71 37.21 0.00 0.00 13324.59 9578.34 22181.42 00:07:23.301 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.301 Nvme2n2 : 1.03 9513.97 37.16 0.00 0.00 13312.13 8620.50 21778.12 00:07:23.301 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.301 Nvme2n3 : 1.03 9503.29 37.12 0.00 0.00 13294.11 7057.72 22282.24 00:07:23.301 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.301 Nvme3n1 : 1.03 9492.44 37.08 0.00 0.00 13283.42 6427.57 23996.26 00:07:23.301 [2024-11-17T14:44:08.844Z] =================================================================================================================== 00:07:23.301 [2024-11-17T14:44:08.844Z] Total : 66676.24 260.45 0.00 0.00 13325.43 6427.57 23996.26 00:07:23.867 00:07:23.867 real 0m2.653s 00:07:23.867 user 0m2.363s 00:07:23.867 sys 0m0.173s 00:07:23.867 14:44:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.867 ************************************ 00:07:23.867 END TEST bdev_write_zeroes 00:07:23.867 ************************************ 00:07:23.867 14:44:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:23.867 14:44:09 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.867 14:44:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:23.867 14:44:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.867 14:44:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.867 ************************************ 00:07:23.867 START TEST bdev_json_nonenclosed 00:07:23.867 ************************************ 00:07:23.867 14:44:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.125 [2024-11-17 14:44:09.438595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:24.125 [2024-11-17 14:44:09.438706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61998 ] 00:07:24.125 [2024-11-17 14:44:09.597980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.384 [2024-11-17 14:44:09.690711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.384 [2024-11-17 14:44:09.690787] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:24.384 [2024-11-17 14:44:09.690803] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:24.384 [2024-11-17 14:44:09.690812] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.384 00:07:24.384 real 0m0.486s 00:07:24.384 user 0m0.290s 00:07:24.384 sys 0m0.092s 00:07:24.384 14:44:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.384 ************************************ 00:07:24.384 END TEST bdev_json_nonenclosed 00:07:24.384 ************************************ 00:07:24.384 14:44:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:24.384 14:44:09 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.384 14:44:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:24.384 14:44:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.384 14:44:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.384 ************************************ 00:07:24.384 START TEST bdev_json_nonarray 00:07:24.384 ************************************ 00:07:24.384 14:44:09 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.643 [2024-11-17 14:44:09.965620] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:24.644 [2024-11-17 14:44:09.965732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62018 ] 00:07:24.644 [2024-11-17 14:44:10.126067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.902 [2024-11-17 14:44:10.222436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.902 [2024-11-17 14:44:10.222517] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
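Both JSON negative tests above feed bdevperf a deliberately malformed --json file and only assert that the loader rejects it; the files themselves are not echoed in the log. For reference, the shape the loader accepts is roughly a single object whose "subsystems" key is an array:

  # minimal accepted shape (illustrative); nonenclosed.json drops the outer {} and
  # nonarray.json makes "subsystems" a non-array, producing the two *ERROR* lines above
  echo '{ "subsystems": [] }' > /tmp/minimal.json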
00:07:24.902 [2024-11-17 14:44:10.222534] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:24.902 [2024-11-17 14:44:10.222543] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.902 00:07:24.902 real 0m0.495s 00:07:24.902 user 0m0.307s 00:07:24.902 sys 0m0.083s 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.902 ************************************ 00:07:24.902 END TEST bdev_json_nonarray 00:07:24.902 ************************************ 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:24.902 14:44:10 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:24.902 14:44:10 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:24.902 14:44:10 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:24.902 14:44:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.902 14:44:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.902 14:44:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.902 ************************************ 00:07:24.902 START TEST bdev_gpt_uuid 00:07:24.902 ************************************ 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62049 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62049 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62049 ']' 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:24.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.902 14:44:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:25.161 [2024-11-17 14:44:10.513617] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
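The bdev_gpt_uuid test starting here asks the freshly launched spdk_tgt for each GPT partition bdev by UUID and checks that the fields echo back, as the rpc/jq pairs further down show. In isolation the first check is roughly the following sketch, with the UUID taken from the dump below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  uuid=6f89f330-603b-4116-ac73-2ca8eae53030    # SPDK_TEST_first partition, as dumped below
  bdev_json=$("$rpc" bdev_get_bdevs -b "$uuid")
  # alias and GPT unique_partition_guid must both match the UUID used for the lookup
  [[ $(jq -r '.[0].aliases[0]' <<<"$bdev_json") == "$uuid" ]]
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev_json") == "$uuid" ]]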
00:07:25.161 [2024-11-17 14:44:10.513734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62049 ] 00:07:25.161 [2024-11-17 14:44:10.670361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.419 [2024-11-17 14:44:10.763981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.015 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.015 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:26.015 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.015 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.015 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.301 Some configs were skipped because the RPC state that can call them passed over. 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:26.301 { 00:07:26.301 "name": "Nvme1n1p1", 00:07:26.301 "aliases": [ 00:07:26.301 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:26.301 ], 00:07:26.301 "product_name": "GPT Disk", 00:07:26.301 "block_size": 4096, 00:07:26.301 "num_blocks": 655104, 00:07:26.301 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:26.301 "assigned_rate_limits": { 00:07:26.301 "rw_ios_per_sec": 0, 00:07:26.301 "rw_mbytes_per_sec": 0, 00:07:26.301 "r_mbytes_per_sec": 0, 00:07:26.301 "w_mbytes_per_sec": 0 00:07:26.301 }, 00:07:26.301 "claimed": false, 00:07:26.301 "zoned": false, 00:07:26.301 "supported_io_types": { 00:07:26.301 "read": true, 00:07:26.301 "write": true, 00:07:26.301 "unmap": true, 00:07:26.301 "flush": true, 00:07:26.301 "reset": true, 00:07:26.301 "nvme_admin": false, 00:07:26.301 "nvme_io": false, 00:07:26.301 "nvme_io_md": false, 00:07:26.301 "write_zeroes": true, 00:07:26.301 "zcopy": false, 00:07:26.301 "get_zone_info": false, 00:07:26.301 "zone_management": false, 00:07:26.301 "zone_append": false, 00:07:26.301 "compare": true, 00:07:26.301 "compare_and_write": false, 00:07:26.301 "abort": true, 00:07:26.301 "seek_hole": false, 00:07:26.301 "seek_data": false, 00:07:26.301 "copy": true, 00:07:26.301 "nvme_iov_md": false 00:07:26.301 }, 00:07:26.301 "driver_specific": { 
00:07:26.301 "gpt": { 00:07:26.301 "base_bdev": "Nvme1n1", 00:07:26.301 "offset_blocks": 256, 00:07:26.301 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:26.301 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:26.301 "partition_name": "SPDK_TEST_first" 00:07:26.301 } 00:07:26.301 } 00:07:26.301 } 00:07:26.301 ]' 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:26.301 { 00:07:26.301 "name": "Nvme1n1p2", 00:07:26.301 "aliases": [ 00:07:26.301 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:26.301 ], 00:07:26.301 "product_name": "GPT Disk", 00:07:26.301 "block_size": 4096, 00:07:26.301 "num_blocks": 655103, 00:07:26.301 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:26.301 "assigned_rate_limits": { 00:07:26.301 "rw_ios_per_sec": 0, 00:07:26.301 "rw_mbytes_per_sec": 0, 00:07:26.301 "r_mbytes_per_sec": 0, 00:07:26.301 "w_mbytes_per_sec": 0 00:07:26.301 }, 00:07:26.301 "claimed": false, 00:07:26.301 "zoned": false, 00:07:26.301 "supported_io_types": { 00:07:26.301 "read": true, 00:07:26.301 "write": true, 00:07:26.301 "unmap": true, 00:07:26.301 "flush": true, 00:07:26.301 "reset": true, 00:07:26.301 "nvme_admin": false, 00:07:26.301 "nvme_io": false, 00:07:26.301 "nvme_io_md": false, 00:07:26.301 "write_zeroes": true, 00:07:26.301 "zcopy": false, 00:07:26.301 "get_zone_info": false, 00:07:26.301 "zone_management": false, 00:07:26.301 "zone_append": false, 00:07:26.301 "compare": true, 00:07:26.301 "compare_and_write": false, 00:07:26.301 "abort": true, 00:07:26.301 "seek_hole": false, 00:07:26.301 "seek_data": false, 00:07:26.301 "copy": true, 00:07:26.301 "nvme_iov_md": false 00:07:26.301 }, 00:07:26.301 "driver_specific": { 00:07:26.301 "gpt": { 00:07:26.301 "base_bdev": "Nvme1n1", 00:07:26.301 "offset_blocks": 655360, 00:07:26.301 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:26.301 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:26.301 "partition_name": "SPDK_TEST_second" 00:07:26.301 } 00:07:26.301 } 00:07:26.301 } 00:07:26.301 ]' 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:26.301 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62049 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62049 ']' 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62049 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62049 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.560 killing process with pid 62049 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62049' 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62049 00:07:26.560 14:44:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62049 00:07:27.934 00:07:27.934 real 0m2.951s 00:07:27.934 user 0m3.102s 00:07:27.934 sys 0m0.352s 00:07:27.934 14:44:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.934 14:44:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.934 ************************************ 00:07:27.934 END TEST bdev_gpt_uuid 00:07:27.934 ************************************ 00:07:27.934 14:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:27.934 14:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:27.934 14:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:27.934 14:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:27.934 14:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.934 14:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:27.934 14:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:27.934 14:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:27.935 14:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:28.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:28.451 Waiting for block devices as requested 00:07:28.451 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.451 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:28.709 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.709 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.974 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:33.974 14:44:19 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:33.974 14:44:19 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:33.974 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:33.974 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:33.974 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:33.974 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:33.974 14:44:19 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:33.974 00:07:33.974 real 0m56.525s 00:07:33.974 user 1m11.603s 00:07:33.974 sys 0m7.864s 00:07:33.974 14:44:19 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.974 ************************************ 00:07:33.974 END TEST blockdev_nvme_gpt 00:07:33.974 ************************************ 00:07:33.974 14:44:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.974 14:44:19 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:33.974 14:44:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.974 14:44:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.974 14:44:19 -- common/autotest_common.sh@10 -- # set +x 00:07:33.974 ************************************ 00:07:33.974 START TEST nvme 00:07:33.974 ************************************ 00:07:33.974 14:44:19 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:33.974 * Looking for test storage... 00:07:33.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:33.974 14:44:19 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.974 14:44:19 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.974 14:44:19 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:34.233 14:44:19 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:34.233 14:44:19 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.233 14:44:19 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.233 14:44:19 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.233 14:44:19 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.233 14:44:19 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.233 14:44:19 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.233 14:44:19 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.233 14:44:19 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.233 14:44:19 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.233 14:44:19 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.233 14:44:19 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.233 14:44:19 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:34.233 14:44:19 nvme -- scripts/common.sh@345 -- # : 1 00:07:34.233 14:44:19 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.233 14:44:19 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.233 14:44:19 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:34.233 14:44:19 nvme -- scripts/common.sh@353 -- # local d=1 00:07:34.233 14:44:19 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.233 14:44:19 nvme -- scripts/common.sh@355 -- # echo 1 00:07:34.233 14:44:19 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.233 14:44:19 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:34.233 14:44:19 nvme -- scripts/common.sh@353 -- # local d=2 00:07:34.233 14:44:19 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.233 14:44:19 nvme -- scripts/common.sh@355 -- # echo 2 00:07:34.233 14:44:19 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.233 14:44:19 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.233 14:44:19 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.233 14:44:19 nvme -- scripts/common.sh@368 -- # return 0 00:07:34.233 14:44:19 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.233 14:44:19 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:34.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.233 --rc genhtml_branch_coverage=1 00:07:34.233 --rc genhtml_function_coverage=1 00:07:34.233 --rc genhtml_legend=1 00:07:34.233 --rc geninfo_all_blocks=1 00:07:34.233 --rc geninfo_unexecuted_blocks=1 00:07:34.233 00:07:34.233 ' 00:07:34.233 14:44:19 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:34.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.233 --rc genhtml_branch_coverage=1 00:07:34.233 --rc genhtml_function_coverage=1 00:07:34.233 --rc genhtml_legend=1 00:07:34.233 --rc geninfo_all_blocks=1 00:07:34.233 --rc geninfo_unexecuted_blocks=1 00:07:34.233 00:07:34.233 ' 00:07:34.233 14:44:19 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:34.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.233 --rc genhtml_branch_coverage=1 00:07:34.233 --rc genhtml_function_coverage=1 00:07:34.233 --rc genhtml_legend=1 00:07:34.233 --rc geninfo_all_blocks=1 00:07:34.233 --rc geninfo_unexecuted_blocks=1 00:07:34.233 00:07:34.233 ' 00:07:34.233 14:44:19 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:34.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.233 --rc genhtml_branch_coverage=1 00:07:34.233 --rc genhtml_function_coverage=1 00:07:34.233 --rc genhtml_legend=1 00:07:34.233 --rc geninfo_all_blocks=1 00:07:34.233 --rc geninfo_unexecuted_blocks=1 00:07:34.233 00:07:34.233 ' 00:07:34.233 14:44:19 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:34.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.058 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.058 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.058 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.058 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.316 14:44:20 nvme -- nvme/nvme.sh@79 -- # uname 00:07:35.316 14:44:20 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:35.316 14:44:20 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:35.316 14:44:20 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:35.316 14:44:20 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:35.316 14:44:20 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:35.316 14:44:20 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:35.316 Waiting for stub to ready for secondary processes... 00:07:35.316 14:44:20 nvme -- common/autotest_common.sh@1075 -- # stubpid=62679 00:07:35.316 14:44:20 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:35.316 14:44:20 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:35.316 14:44:20 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62679 ]] 00:07:35.316 14:44:20 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:35.316 14:44:20 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:35.316 [2024-11-17 14:44:20.655845] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:35.316 [2024-11-17 14:44:20.655971] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:35.883 [2024-11-17 14:44:21.419940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.153 [2024-11-17 14:44:21.512460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.153 [2024-11-17 14:44:21.512705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.153 [2024-11-17 14:44:21.512721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.153 [2024-11-17 14:44:21.526789] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:36.153 [2024-11-17 14:44:21.526826] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.153 [2024-11-17 14:44:21.540680] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:36.153 [2024-11-17 14:44:21.540764] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:36.153 [2024-11-17 14:44:21.543019] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.153 [2024-11-17 14:44:21.543170] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:36.153 [2024-11-17 14:44:21.543220] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:36.153 [2024-11-17 14:44:21.546343] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.153 [2024-11-17 14:44:21.547627] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:36.153 [2024-11-17 14:44:21.547736] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:36.153 [2024-11-17 14:44:21.551839] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.153 [2024-11-17 14:44:21.552132] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:36.153 [2024-11-17 14:44:21.552237] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:36.153 [2024-11-17 14:44:21.552311] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:36.153 [2024-11-17 14:44:21.552370] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:36.153 done. 00:07:36.153 14:44:21 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:36.153 14:44:21 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:36.153 14:44:21 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:36.153 14:44:21 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:36.153 14:44:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.153 14:44:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.153 ************************************ 00:07:36.153 START TEST nvme_reset 00:07:36.153 ************************************ 00:07:36.153 14:44:21 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:36.449 Initializing NVMe Controllers 00:07:36.449 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:36.449 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:36.449 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:36.449 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:36.449 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:36.449 00:07:36.449 real 0m0.190s 00:07:36.449 user 0m0.066s 00:07:36.449 sys 0m0.088s 00:07:36.449 14:44:21 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.449 14:44:21 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:36.449 ************************************ 00:07:36.449 END TEST nvme_reset 00:07:36.449 ************************************ 00:07:36.449 14:44:21 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:36.449 14:44:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.449 14:44:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.449 14:44:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.449 ************************************ 00:07:36.449 START TEST nvme_identify 00:07:36.449 ************************************ 00:07:36.449 14:44:21 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:36.449 14:44:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:36.449 14:44:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:36.449 14:44:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:36.449 14:44:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:36.449 14:44:21 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:36.449 14:44:21 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:36.450 14:44:21 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:36.450 14:44:21 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:36.450 14:44:21 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:36.450 14:44:21 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:36.450 14:44:21 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:36.450 14:44:21 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:36.712 
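The xtrace above shows how the nvme_identify helper assembles its device list before dumping controller data: gen_nvme.sh emits a JSON config, jq pulls out each controller's PCI address (traddr), the list is checked for emptiness, and spdk_nvme_identify is then run. The following is a minimal standalone sketch of that pattern, not part of the test suite itself; the rootdir path and binaries match this run's environment and should be treated as assumptions anywhere else.

  #!/usr/bin/env bash
  # Sketch of the get_nvme_bdfs / nvme_identify pattern seen in the trace above.
  rootdir=/home/vagrant/spdk_repo/spdk   # assumption: SPDK checkout location from this CI run

  # Collect the PCI addresses (BDFs) of all NVMe controllers from gen_nvme.sh's JSON output,
  # using the same jq filter the test trace shows.
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

  # Mirror the "(( 4 == 0 ))" guard in the log: bail out if no controllers were found.
  if (( ${#bdfs[@]} == 0 )); then
    echo "no NVMe controllers found" >&2
    exit 1
  fi
  printf 'found %d controllers: %s\n' "${#bdfs[@]}" "${bdfs[*]}"

  # Dump identify data, same binary and -i 0 flag as in the trace.
  "$rootdir/build/bin/spdk_nvme_identify" -i 0

On this run the jq pipeline yields four addresses (0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0), and the identify dump that follows covers each of those controllers in turn.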
===================================================== 00:07:36.712 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:36.712 ===================================================== 00:07:36.712 Controller Capabilities/Features 00:07:36.712 ================================ 00:07:36.712 Vendor ID: 1b36 00:07:36.712 Subsystem Vendor ID: 1af4 00:07:36.712 Serial Number: 12341 00:07:36.712 Model Number: QEMU NVMe Ctrl 00:07:36.712 Firmware Version: 8.0.0 00:07:36.712 Recommended Arb Burst: 6 00:07:36.712 IEEE OUI Identifier: 00 54 52 00:07:36.712 Multi-path I/O 00:07:36.712 May have multiple subsystem ports: No 00:07:36.712 May have multiple controllers: No 00:07:36.712 Associated with SR-IOV VF: No 00:07:36.712 Max Data Transfer Size: 524288 00:07:36.712 Max Number of Namespaces: 256 00:07:36.712 Max Number of I/O Queues: 64 00:07:36.712 NVMe Specification Version (VS): 1.4 00:07:36.712 NVMe Specification Version (Identify): 1.4 00:07:36.712 Maximum Queue Entries: 2048 00:07:36.712 Contiguous Queues Required: Yes 00:07:36.712 Arbitration Mechanisms Supported 00:07:36.712 Weighted Round Robin: Not Supported 00:07:36.712 Vendor Specific: Not Supported 00:07:36.712 Reset Timeout: 7500 ms 00:07:36.712 Doorbell Stride: 4 bytes 00:07:36.712 NVM Subsystem Reset: Not Supported 00:07:36.712 Command Sets Supported 00:07:36.712 NVM Command Set: Supported 00:07:36.712 Boot Partition: Not Supported 00:07:36.712 Memory Page Size Minimum: 4096 bytes 00:07:36.712 Memory Page Size Maximum: 65536 bytes 00:07:36.712 Persistent Memory Region: Not Supported 00:07:36.712 Optional Asynchronous Events Supported 00:07:36.712 Namespace Attribute Notices: Supported 00:07:36.712 Firmware Activation Notices: Not Supported 00:07:36.712 ANA Change Notices: Not Supported 00:07:36.712 PLE Aggregate Log Change Notices: Not Supported 00:07:36.712 LBA Status Info Alert Notices: Not Supported 00:07:36.712 EGE Aggregate Log Change Notices: Not Supported 00:07:36.712 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.712 Zone Descriptor Change Notices: Not Supported 00:07:36.712 Discovery Log Change Notices: Not Supported 00:07:36.712 Controller Attributes 00:07:36.712 128-bit Host Identifier: Not Supported 00:07:36.712 Non-Operational Permissive Mode: Not Supported 00:07:36.712 NVM Sets: Not Supported 00:07:36.712 Read Recovery Levels: Not Supported 00:07:36.712 Endurance Groups: Not Supported 00:07:36.712 Predictable Latency Mode: Not Supported 00:07:36.712 Traffic Based Keep ALive: Not Supported 00:07:36.712 Namespace Granularity: Not Supported 00:07:36.712 SQ Associations: Not Supported 00:07:36.712 UUID List: Not Supported 00:07:36.712 Multi-Domain Subsystem: Not Supported 00:07:36.712 Fixed Capacity Management: Not Supported 00:07:36.712 Variable Capacity Management: Not Supported 00:07:36.712 Delete Endurance Group: Not Supported 00:07:36.712 Delete NVM Set: Not Supported 00:07:36.712 Extended LBA Formats Supported: Supported 00:07:36.712 Flexible Data Placement Supported: Not Supported 00:07:36.712 00:07:36.712 Controller Memory Buffer Support 00:07:36.712 ================================ 00:07:36.712 Supported: No 00:07:36.712 00:07:36.712 Persistent Memory Region Support 00:07:36.712 ================================ 00:07:36.712 Supported: No 00:07:36.712 00:07:36.712 Admin Command Set Attributes 00:07:36.712 ============================ 00:07:36.712 Security Send/Receive: Not Supported 00:07:36.712 Format NVM: Supported 00:07:36.712 Firmware Activate/Download: Not Supported 00:07:36.712 Namespace Management: 
Supported 00:07:36.712 Device Self-Test: Not Supported 00:07:36.712 Directives: Supported 00:07:36.712 NVMe-MI: Not Supported 00:07:36.712 Virtualization Management: Not Supported 00:07:36.712 Doorbell Buffer Config: Supported 00:07:36.712 Get LBA Status Capability: Not Supported 00:07:36.712 Command & Feature Lockdown Capability: Not Supported 00:07:36.712 Abort Command Limit: 4 00:07:36.712 Async Event Request Limit: 4 00:07:36.712 Number of Firmware Slots: N/A 00:07:36.712 Firmware Slot 1 Read-Only: N/A 00:07:36.712 Firmware Activation Without Reset: N/A 00:07:36.712 Multiple Update Detection Support: N/A 00:07:36.712 Firmware Update Granularity: No Information Provided 00:07:36.712 Per-Namespace SMART Log: Yes 00:07:36.712 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.712 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:36.712 Command Effects Log Page: Supported 00:07:36.712 Get Log Page Extended Data: Supported 00:07:36.712 Telemetry Log Pages: Not Supported 00:07:36.712 Persistent Event Log Pages: Not Supported 00:07:36.713 Supported Log Pages Log Page: May Support 00:07:36.713 Commands Supported & Effects Log Page: Not Supported 00:07:36.713 Feature Identifiers & Effects Log Page:May Support 00:07:36.713 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.713 Data Area 4 for Telemetry Log: Not Supported 00:07:36.713 Error Log Page Entries Supported: 1 00:07:36.713 Keep Alive: Not Supported 00:07:36.713 00:07:36.713 NVM Command Set Attributes 00:07:36.713 ========================== 00:07:36.713 Submission Queue Entry Size 00:07:36.713 Max: 64 00:07:36.713 Min: 64 00:07:36.713 Completion Queue Entry Size 00:07:36.713 Max: 16 00:07:36.713 Min: 16 00:07:36.713 Number of Namespaces: 256 00:07:36.713 Compare Command: Supported 00:07:36.713 Write Uncorrectable Command: Not Supported 00:07:36.713 Dataset Management Command: Supported 00:07:36.713 Write Zeroes Command: Supported 00:07:36.713 Set Features Save Field: Supported 00:07:36.713 Reservations: Not Supported 00:07:36.713 Timestamp: Supported 00:07:36.713 Copy: Supported 00:07:36.713 Volatile Write Cache: Present 00:07:36.713 Atomic Write Unit (Normal): 1 00:07:36.713 Atomic Write Unit (PFail): 1 00:07:36.713 Atomic Compare & Write Unit: 1 00:07:36.713 Fused Compare & Write: Not Supported 00:07:36.713 Scatter-Gather List 00:07:36.713 SGL Command Set: Supported 00:07:36.713 SGL Keyed: Not Supported 00:07:36.713 SGL Bit Bucket Descriptor: Not Supported 00:07:36.713 SGL Metadata Pointer: Not Supported 00:07:36.713 Oversized SGL: Not Supported 00:07:36.713 SGL Metadata Address: Not Supported 00:07:36.713 SGL Offset: Not Supported 00:07:36.713 Transport SGL Data Block: Not Supported 00:07:36.713 Replay Protected Memory Block: Not Supported 00:07:36.713 00:07:36.713 Firmware Slot Information 00:07:36.713 ========================= 00:07:36.713 Active slot: 1 00:07:36.713 Slot 1 Firmware Revision: 1.0 00:07:36.713 00:07:36.713 00:07:36.713 Commands Supported and Effects 00:07:36.713 ============================== 00:07:36.713 Admin Commands 00:07:36.713 -------------- 00:07:36.713 Delete I/O Submission Queue (00h): Supported 00:07:36.713 Create I/O Submission Queue (01h): Supported 00:07:36.713 Get Log Page (02h): Supported 00:07:36.713 Delete I/O Completion Queue (04h): Supported 00:07:36.713 Create I/O Completion Queue (05h): Supported 00:07:36.713 Identify (06h): Supported 00:07:36.713 Abort (08h): Supported 00:07:36.713 Set Features (09h): Supported 00:07:36.713 Get Features (0Ah): Supported 00:07:36.713 Asynchronous 
Event Request (0Ch): Supported 00:07:36.713 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.713 Directive Send (19h): Supported 00:07:36.713 Directive Receive (1Ah): Supported 00:07:36.713 Virtualization Management (1Ch): Supported 00:07:36.713 Doorbell Buffer Config (7Ch): Supported 00:07:36.713 Format NVM (80h): Supported LBA-Change 00:07:36.713 I/O Commands 00:07:36.713 ------------ 00:07:36.713 Flush (00h): Supported LBA-Change 00:07:36.713 Write (01h): Supported LBA-Change 00:07:36.713 Read (02h): Supported 00:07:36.713 Compare (05h): Supported 00:07:36.713 Write Zeroes (08h): Supported LBA-Change 00:07:36.713 Dataset Management (09h): Supported LBA-Change 00:07:36.713 Unknown (0Ch): Supported 00:07:36.713 Unknown (12h): Supported 00:07:36.713 Copy (19h): Supported LBA-Change 00:07:36.713 Unknown (1Dh): Supported LBA-Change 00:07:36.713 00:07:36.713 Error Log 00:07:36.713 ========= 00:07:36.713 00:07:36.713 Arbitration 00:07:36.713 =========== 00:07:36.713 Arbitration Burst: no limit 00:07:36.713 00:07:36.713 Power Management 00:07:36.713 ================ 00:07:36.713 Number of Power States: 1 00:07:36.713 Current Power State: Power State #0 00:07:36.713 Power State #0: 00:07:36.713 Max Power: 25.00 W 00:07:36.713 Non-Operational State: Operational 00:07:36.713 Entry Latency: 16 microseconds 00:07:36.713 Exit Latency: 4 microseconds 00:07:36.713 Relative Read Throughput: 0 00:07:36.713 Relative Read Latency: 0 00:07:36.713 Relative Write Throughput: 0 00:07:36.713 Relative Write Latency: 0 00:07:36.713 Idle Power: Not Reported 00:07:36.713 Active Power: Not Reported 00:07:36.713 Non-Operational Permissive Mode: Not Supported 00:07:36.713 00:07:36.713 Health Information 00:07:36.713 ================== 00:07:36.713 Critical Warnings: 00:07:36.713 Available Spare Space: OK 00:07:36.713 Temperature: OK 00:07:36.713 Device Reliability: OK 00:07:36.713 Read Only: No 00:07:36.713 Volatile Memory Backup: OK 00:07:36.713 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.713 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.713 Available Spare: 0% 00:07:36.713 Available Spare Threshold: 0% 00:07:36.713 Life Percentage Used: 0% 00:07:36.713 Data Units Read: 1105 00:07:36.713 Data Units Written: 977 00:07:36.713 Host Read Commands: 57483 00:07:36.713 Host Write Commands: 56363 00:07:36.713 Controller Busy Time: 0 minutes 00:07:36.713 Power Cycles: 0 00:07:36.713 Power On Hours: 0 hours 00:07:36.713 Unsafe Shutdowns: 0 00:07:36.713 Unrecoverable Media Errors: 0 00:07:36.713 Lifetime Error Log Entries: 0 00:07:36.713 Warning Temperature Time: 0 minutes 00:07:36.713 Critical Temperature Time: 0 minutes 00:07:36.713 00:07:36.713 Number of Queues 00:07:36.713 ================ 00:07:36.713 Number of I/O Submission Queues: 64 00:07:36.713 Number of I/O Completion Queues: 64 00:07:36.713 00:07:36.713 ZNS Specific Controller Data 00:07:36.713 ============================ 00:07:36.713 Zone Append Size Limit: 0 00:07:36.713 00:07:36.713 00:07:36.713 Active Namespaces 00:07:36.713 ================= 00:07:36.713 Namespace ID:1 00:07:36.713 Error Recovery Timeout: Unlimited 00:07:36.713 Command Set Identifier: NVM (00h) 00:07:36.713 Deallocate: Supported 00:07:36.713 Deallocated/Unwritten Error: Supported 00:07:36.713 Deallocated Read Value: All 0x00 00:07:36.713 Deallocate in Write Zeroes: Not Supported 00:07:36.713 Deallocated Guard Field: 0xFFFF 00:07:36.713 Flush: Supported 00:07:36.713 Reservation: Not Supported 00:07:36.713 Namespace Sharing Capabilities: Private 00:07:36.713 
Size (in LBAs): 1310720 (5GiB) 00:07:36.713 Capacity (in LBAs): 1310720 (5GiB) 00:07:36.713 Utilization (in LBAs): 1310720 (5GiB) 00:07:36.713 Thin Provisioning: Not Supported 00:07:36.713 Per-NS Atomic Units: No 00:07:36.713 Maximum Single Source Range Length: 128 00:07:36.713 Maximum Copy Length: 128 00:07:36.713 Maximum Source Range Count: 128 00:07:36.713 NGUID/EUI64 Never Reused: No 00:07:36.713 Namespace Write Protected: No 00:07:36.713 Number of LBA Formats: 8 00:07:36.713 Current LBA Format: LBA Format #04 00:07:36.713 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.713 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.713 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.713 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.713 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.713 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.713 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.713 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.713 00:07:36.713 NVM Specific Namespace Data 00:07:36.713 =========================== 00:07:36.713 Logical Block Storage Tag Mask: 0 00:07:36.713 Protection Information Capabilities: 00:07:36.713 16b Guard Protection Information Storage Tag Support: No 00:07:36.713 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.713 Storage Tag Check Read Support: No 00:07:36.713 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.713 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.713 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.713 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.713 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.713 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.713 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.713 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.713 ===================================================== 00:07:36.713 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:36.713 ===================================================== 00:07:36.713 Controller Capabilities/Features 00:07:36.713 ================================ 00:07:36.713 Vendor ID: 1b36 00:07:36.713 Subsystem Vendor ID: 1af4 00:07:36.713 Serial Number: 12343 00:07:36.714 Model Number: QEMU NVMe Ctrl 00:07:36.714 Firmware Version: 8.0.0 00:07:36.714 Recommended Arb Burst: 6 00:07:36.714 IEEE OUI Identifier: 00 54 52 00:07:36.714 Multi-path I/O 00:07:36.714 May have multiple subsystem ports: No 00:07:36.714 May have multiple controllers: Yes 00:07:36.714 Associated with SR-IOV VF: No 00:07:36.714 Max Data Transfer Size: 524288 00:07:36.714 Max Number of Namespaces: 256 00:07:36.714 Max Number of I/O Queues: 64 00:07:36.714 NVMe Specification Version (VS): 1.4 00:07:36.714 NVMe Specification Version (Identify): 1.4 00:07:36.714 Maximum Queue Entries: 2048 00:07:36.714 Contiguous Queues Required: Yes 00:07:36.714 Arbitration Mechanisms Supported 00:07:36.714 Weighted Round Robin: Not Supported 00:07:36.714 Vendor Specific: Not Supported 00:07:36.714 Reset Timeout: 7500 ms 00:07:36.714 Doorbell Stride: 4 bytes 00:07:36.714 NVM Subsystem Reset: Not Supported 
00:07:36.714 Command Sets Supported 00:07:36.714 NVM Command Set: Supported 00:07:36.714 Boot Partition: Not Supported 00:07:36.714 Memory Page Size Minimum: 4096 bytes 00:07:36.714 Memory Page Size Maximum: 65536 bytes 00:07:36.714 Persistent Memory Region: Not Supported 00:07:36.714 Optional Asynchronous Events Supported 00:07:36.714 Namespace Attribute Notices: Supported 00:07:36.714 Firmware Activation Notices: Not Supported 00:07:36.714 ANA Change Notices: Not Supported 00:07:36.714 PLE Aggregate Log Change Notices: Not Supported 00:07:36.714 LBA Status Info Alert Notices: Not Supported 00:07:36.714 EGE Aggregate Log Change Notices: Not Supported 00:07:36.714 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.714 Zone Descriptor Change Notices: Not Supported 00:07:36.714 Discovery Log Change Notices: Not Supported 00:07:36.714 Controller Attributes 00:07:36.714 128-bit Host Identifier: Not Supported 00:07:36.714 Non-Operational Permissive Mode: Not Supported 00:07:36.714 NVM Sets: Not Supported 00:07:36.714 Read Recovery Levels: Not Supported 00:07:36.714 Endurance Groups: Supported 00:07:36.714 Predictable Latency Mode: Not Supported 00:07:36.714 Traffic Based Keep ALive: Not Supported 00:07:36.714 Namespace Granularity: Not Supported 00:07:36.714 SQ Associations: Not Supported 00:07:36.714 UUID List: Not Supported 00:07:36.714 Multi-Domain Subsystem: Not Supported 00:07:36.714 Fixed Capacity Management: Not Supported 00:07:36.714 Variable Capacity Management: Not Supported 00:07:36.714 Delete Endurance Group: Not Supported 00:07:36.714 Delete NVM Set: Not Supported 00:07:36.714 Extended LBA Formats Supported: Supported 00:07:36.714 Flexible Data Placement Supported: Supported 00:07:36.714 00:07:36.714 Controller Memory Buffer Support 00:07:36.714 ================================ 00:07:36.714 Supported: No 00:07:36.714 00:07:36.714 Persistent Memory Region Support 00:07:36.714 ================================ 00:07:36.714 Supported: No 00:07:36.714 00:07:36.714 Admin Command Set Attributes 00:07:36.714 ============================ 00:07:36.714 Security Send/Receive: Not Supported 00:07:36.714 Format NVM: Supported 00:07:36.714 Firmware Activate/Download: Not Supported 00:07:36.714 Namespace Management: Supported 00:07:36.714 Device Self-Test: Not Supported 00:07:36.714 Directives: Supported 00:07:36.714 NVMe-MI: Not Supported 00:07:36.714 Virtualization Management: Not Supported 00:07:36.714 Doorbell Buffer Config: Supported 00:07:36.714 Get LBA Status Capability: Not Supported 00:07:36.714 Command & Feature Lockdown Capability: Not Supported 00:07:36.714 Abort Command Limit: 4 00:07:36.714 Async Event Request Limit: 4 00:07:36.714 Number of Firmware Slots: N/A 00:07:36.714 Firmware Slot 1 Read-Only: N/A 00:07:36.714 Firmware Activation Without Reset: N/A 00:07:36.714 Multiple Update Detection Support: N/A 00:07:36.714 Firmware Update Granularity: No Information Provided 00:07:36.714 Per-Namespace SMART Log: Yes 00:07:36.714 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.714 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:36.714 Command Effects Log Page: Supported 00:07:36.714 Get Log Page Extended Data: Supported 00:07:36.714 Telemetry Log Pages: Not Supported 00:07:36.714 Persistent Event Log Pages: Not Supported 00:07:36.714 Supported Log Pages Log Page: May Support 00:07:36.714 Commands Supported & Effects Log Page: Not Supported 00:07:36.714 Feature Identifiers & Effects Log Page:May Support 00:07:36.714 NVMe-MI Commands & Effects Log Page: May 
Support 00:07:36.714 Data Area 4 for Telemetry Log: Not Supported 00:07:36.714 Error Log Page Entries Supported: 1 00:07:36.714 Keep Alive: Not Supported 00:07:36.714 00:07:36.714 NVM Command Set Attributes 00:07:36.714 ========================== 00:07:36.714 Submission Queue Entry Size 00:07:36.714 Max: 64 00:07:36.714 Min: 64 00:07:36.714 Completion Queue Entry Size 00:07:36.714 Max: 16 00:07:36.714 Min: 16 00:07:36.714 Number of Namespaces: 256 00:07:36.714 Compare Command: Supported 00:07:36.714 Write Uncorrectable Command: Not Supported 00:07:36.714 Dataset Management Command: Supported 00:07:36.714 Write Zeroes Command: Supported 00:07:36.714 Set Features Save Field: Supported 00:07:36.714 Reservations: Not Supported 00:07:36.714 Timestamp: Supported 00:07:36.714 Copy: Supported 00:07:36.714 Volatile Write Cache: Present 00:07:36.714 Atomic Write Unit (Normal): 1 00:07:36.714 Atomic Write Unit (PFail): 1 00:07:36.714 Atomic Compare & Write Unit: 1 00:07:36.714 Fused Compare & Write: Not Supported 00:07:36.714 Scatter-Gather List 00:07:36.714 SGL Command Set: Supported 00:07:36.714 SGL Keyed: Not Supported 00:07:36.714 SGL Bit Bucket Descriptor: Not Supported 00:07:36.714 SGL Metadata Pointer: Not Supported 00:07:36.714 Oversized SGL: Not Supported 00:07:36.714 SGL Metadata Address: Not Supported 00:07:36.714 SGL Offset: Not Supported 00:07:36.714 Transport SGL Data Block: Not Supported 00:07:36.714 Replay Protected Memory Block: Not Supported 00:07:36.714 00:07:36.714 Firmware Slot Information 00:07:36.714 ========================= 00:07:36.714 Active slot: 1 00:07:36.714 Slot 1 Firmware Revision: 1.0 00:07:36.714 00:07:36.714 00:07:36.714 Commands Supported and Effects 00:07:36.714 ============================== 00:07:36.714 Admin Commands 00:07:36.714 -------------- 00:07:36.714 Delete I/O Submission Queue (00h): Supported 00:07:36.714 Create I/O Submission Queue (01h): Supported 00:07:36.714 Get Log Page (02h): Supported 00:07:36.714 Delete I/O Completion Queue (04h): Supported 00:07:36.714 Create I/O Completion Queue (05h): Supported 00:07:36.714 Identify (06h): Supported 00:07:36.714 Abort (08h): Supported 00:07:36.714 Set Features (09h): Supported 00:07:36.714 Get Features (0Ah): Supported 00:07:36.714 Asynchronous Event Request (0Ch): Supported 00:07:36.714 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.714 Directive Send (19h): Supported 00:07:36.714 Directive Receive (1Ah): Supported 00:07:36.714 Virtualization Management (1Ch): Supported 00:07:36.714 Doorbell Buffer Config (7Ch): Supported 00:07:36.714 Format NVM (80h): Supported LBA-Change 00:07:36.714 I/O Commands 00:07:36.714 ------------ 00:07:36.714 Flush (00h): Supported LBA-Change 00:07:36.714 Write (01h): Supported LBA-Change 00:07:36.714 Read (02h): Supported 00:07:36.714 Compare (05h): Supported 00:07:36.714 Write Zeroes (08h): Supported LBA-Change 00:07:36.714 Dataset Management (09h): Supported LBA-Change 00:07:36.714 Unknown (0Ch): Supported 00:07:36.714 Unknown (12h): Supported 00:07:36.714 Copy (19h): Supported LBA-Change 00:07:36.714 Unknown (1Dh): Supported LBA-Change 00:07:36.714 00:07:36.714 Error Log 00:07:36.714 ========= 00:07:36.714 00:07:36.714 Arbitration 00:07:36.714 =========== 00:07:36.714 Arbitration Burst: no limit 00:07:36.714 00:07:36.714 Power Management 00:07:36.714 ================ 00:07:36.714 Number of Power States: 1 00:07:36.714 Current Power State: Power State #0 00:07:36.714 Power State #0: 00:07:36.714 Max Power: 25.00 W 00:07:36.714 Non-Operational State: 
Operational 00:07:36.714 Entry Latency: 16 microseconds 00:07:36.714 Exit Latency: 4 microseconds 00:07:36.714 Relative Read Throughput: 0 00:07:36.714 Relative Read Latency: 0 00:07:36.714 Relative Write Throughput: 0 00:07:36.714 Relative Write Latency: 0 00:07:36.714 Idle Power: Not Reported 00:07:36.714 Active Power: Not Reported 00:07:36.714 Non-Operational Permissive Mode: Not Supported 00:07:36.714 00:07:36.714 Health Information 00:07:36.714 ================== 00:07:36.715 Critical Warnings: 00:07:36.715 Available Spare Space: OK 00:07:36.715 Temperature: OK 00:07:36.715 Device Reliability: OK 00:07:36.715 Read Only: No 00:07:36.715 Volatile Memory Backup: OK 00:07:36.715 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.715 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.715 Available Spare: 0% 00:07:36.715 Available Spare Threshold: 0% 00:07:36.715 Life Percentage Used: 0% 00:07:36.715 Data Units Read: 801 00:07:36.715 Data Units Written: 730 00:07:36.715 Host Read Commands: 39755 00:07:36.715 Host Write Commands: 39178 00:07:36.715 Controller Busy Time: 0 minutes 00:07:36.715 Power Cycles: 0 00:07:36.715 Power On Hours: 0 hours 00:07:36.715 Unsafe Shutdowns: 0 00:07:36.715 Unrecoverable Media Errors: 0 00:07:36.715 Lifetime Error Log Entries: 0 00:07:36.715 Warning Temperature Time: 0 minutes 00:07:36.715 Critical Temperature Time: 0 minutes 00:07:36.715 00:07:36.715 Number of Queues 00:07:36.715 ================ 00:07:36.715 Number of I/O Submission Queues: 64 00:07:36.715 Number of I/O Completion Queues: 64 00:07:36.715 00:07:36.715 ZNS Specific Controller Data 00:07:36.715 ============================ 00:07:36.715 Zone Append Size Limit: 0 00:07:36.715 00:07:36.715 00:07:36.715 Active Namespaces 00:07:36.715 ================= 00:07:36.715 Namespace ID:1 00:07:36.715 Error Recovery Timeout: Unlimited 00:07:36.715 Command Set Identifier: NVM (00h) 00:07:36.715 Deallocate: Supported 00:07:36.715 Deallocated/Unwritten Error: Supported 00:07:36.715 Deallocated Read Value: All 0x00 00:07:36.715 Deallocate in Write Zeroes: Not Supported 00:07:36.715 Deallocated Guard Field: 0xFFFF 00:07:36.715 Flush: Supported 00:07:36.715 Reservation: Not Supported 00:07:36.715 Namespace Sharing Capabilities: Multiple Controllers 00:07:36.715 Size (in LBAs): 262144 (1GiB) 00:07:36.715 Capacity (in LBAs): 262144 (1GiB) 00:07:36.715 Utilization (in LBAs): 262144 (1GiB) 00:07:36.715 Thin Provisioning: Not Supported 00:07:36.715 Per-NS Atomic Units: No 00:07:36.715 Maximum Single Source Range Length: 128 00:07:36.715 Maximum Copy Length: 128 00:07:36.715 Maximum Source Range Count: 128 00:07:36.715 NGUID/EUI64 Never Reused: No 00:07:36.715 Namespace Write Protected: No 00:07:36.715 Endurance group ID: 1 00:07:36.715 Number of LBA Formats: 8 00:07:36.715 Current LBA Format: LBA Format #04 00:07:36.715 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.715 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.715 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.715 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.715 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.715 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.715 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.715 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.715 00:07:36.715 Get Feature FDP: 00:07:36.715 ================ 00:07:36.715 Enabled: Yes 00:07:36.715 FDP configuration index: 0 00:07:36.715 00:07:36.715 FDP configurations log page 00:07:36.715 
=========================== 00:07:36.715 Number of FDP configurations: 1 00:07:36.715 Version: 0 00:07:36.715 Size: 112 00:07:36.715 FDP Configuration Descriptor: 0 00:07:36.715 Descriptor Size: 96 00:07:36.715 Reclaim Group Identifier format: 2 00:07:36.715 FDP Volatile Write Cache: Not Present 00:07:36.715 FDP Configuration: Valid 00:07:36.715 Vendor Specific Size: 0 00:07:36.715 Number of Reclaim Groups: 2 00:07:36.715 Number of Recalim Unit Handles: 8 00:07:36.715 Max Placement Identifiers: 128 00:07:36.715 Number of Namespaces Suppprted: 256 00:07:36.715 Reclaim unit Nominal Size: 6000000 bytes 00:07:36.715 Estimated Reclaim Unit Time Limit: Not Reported 00:07:36.715 RUH Desc #000: RUH Type: Initially Isolated 00:07:36.715 RUH Desc #001: RUH Type: Initially Isolated 00:07:36.715 RUH Desc #002: RUH Type: Initially Isolated 00:07:36.715 RUH Desc #003: RUH Type: Initially Isolated 00:07:36.715 RUH Desc #004: RUH Type: Initially Isolated 00:07:36.715 RUH Desc #005: RUH Type: Initially Isolated 00:07:36.715 RUH Desc #006: RUH Type: Initially Isolated 00:07:36.715 RUH Desc #007: RUH Type: Initially Isolated 00:07:36.715 00:07:36.715 FDP reclaim unit handle usage log page 00:07:36.715 ====================================== 00:07:36.715 Number of Reclaim Unit Handles: 8 00:07:36.715 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:36.715 RUH Usage Desc #001: RUH Attributes: Unused 00:07:36.715 RUH Usage Desc #002: RUH Attributes: Unused 00:07:36.715 RUH Usage Desc #003: RUH Attributes: Unused 00:07:36.715 RUH Usage Desc #004: RUH Attributes: Unused 00:07:36.715 RUH Usage Desc #005: RUH Attributes: Unused 00:07:36.715 RUH Usage Desc #006: RUH Attributes: Unused 00:07:36.715 RUH Usage Desc #007: RUH Attributes: Unused 00:07:36.715 00:07:36.715 FDP statistics log page 00:07:36.715 ======================= 00:07:36.715 Host bytes with metadata written: 472158208 00:07:36.715 Media bytes with metadata written: 472215552 00:07:36.715 Media bytes erased: 0 00:07:36.715 00:07:36.715 FDP events log page 00:07:36.715 =================== 00:07:36.715 Number of FDP events: 0 00:07:36.715 00:07:36.715 NVM Specific Namespace Data 00:07:36.715 =========================== 00:07:36.715 Logical Block Storage Tag Mask: 0 00:07:36.715 Protection Information Capabilities: 00:07:36.715 16b Guard Protection Information Storage Tag Support: No 00:07:36.715 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.715 Storage Tag Check Read Support: No 00:07:36.715 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 ===================================================== 00:07:36.715 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:36.715 ===================================================== 00:07:36.715 
Controller Capabilities/Features 00:07:36.715 ================================ 00:07:36.715 Vendor ID: 1b36 00:07:36.715 Subsystem Vendor ID: 1af4 00:07:36.715 Serial Number: 12340 00:07:36.715 Model Number: QEMU NVMe Ctrl 00:07:36.715 Firmware Version: 8.0.0 00:07:36.715 Recommended Arb Burst: 6 00:07:36.715 IEEE OUI Identifier: 00 54 52 00:07:36.715 Multi-path I/O 00:07:36.715 May have multiple subsystem ports: No 00:07:36.715 May have multiple controllers: No 00:07:36.715 Associated with SR-IOV VF: No 00:07:36.715 Max Data Transfer Size: 524288 00:07:36.715 Max Number of Namespaces: 256 00:07:36.715 Max Number of I/O Queues: 64 00:07:36.715 NVMe Specification Version (VS): 1.4 00:07:36.715 NVMe Specification Version (Identify): 1.4 00:07:36.715 Maximum Queue Entries: 2048 00:07:36.715 Contiguous Queues Required: Yes 00:07:36.715 Arbitration Mechanisms Supported 00:07:36.715 Weighted Round Robin: Not Supported 00:07:36.715 Vendor Specific: Not Supported 00:07:36.715 Reset Timeout: 7500 ms 00:07:36.715 Doorbell Stride: 4 bytes 00:07:36.715 NVM Subsystem Reset: Not Supported 00:07:36.715 Command Sets Supported 00:07:36.715 NVM Command Set: Supported 00:07:36.715 Boot Partition: Not Supported 00:07:36.715 Memory Page Size Minimum: 4096 bytes 00:07:36.715 Memory Page Size Maximum: 65536 bytes 00:07:36.715 Persistent Memory Region: Not Supported 00:07:36.715 Optional Asynchronous Events Supported 00:07:36.715 Namespace Attribute Notices: Supported 00:07:36.715 Firmware Activation Notices: Not Supported 00:07:36.715 ANA Change Notices: Not Supported 00:07:36.715 PLE Aggregate Log Change Notices: Not Supported 00:07:36.715 LBA Status Info Alert Notices: Not Supported 00:07:36.715 EGE Aggregate Log Change Notices: Not Supported 00:07:36.715 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.715 Zone Descriptor Change Notices: Not Supported 00:07:36.715 Discovery Log Change Notices: Not Supported 00:07:36.715 Controller Attributes 00:07:36.715 128-bit Host Identifier: Not Supported 00:07:36.715 Non-Operational Permissive Mode: Not Supported 00:07:36.715 NVM Sets: Not Supported 00:07:36.715 Read Recovery Levels: Not Supported 00:07:36.715 Endurance Groups: Not Supported 00:07:36.715 Predictable Latency Mode: Not Supported 00:07:36.715 Traffic Based Keep ALive: Not Supported 00:07:36.715 Namespace Granularity: Not Supported 00:07:36.715 SQ Associations: Not Supported 00:07:36.715 UUID List: Not Supported 00:07:36.716 Multi-Domain Subsystem: Not Supported 00:07:36.716 Fixed Capacity Management: Not Supported 00:07:36.716 Variable Capacity Management: Not Supported 00:07:36.716 Delete Endurance Group: Not Supported 00:07:36.716 Delete NVM Set: Not Supported 00:07:36.716 Extended LBA Formats Supported: Supported 00:07:36.716 Flexible Data Placement Supported: Not Supported 00:07:36.716 00:07:36.716 Controller Memory Buffer Support 00:07:36.716 ================================ 00:07:36.716 Supported: No 00:07:36.716 00:07:36.716 Persistent Memory Region Support 00:07:36.716 ================================ 00:07:36.716 Supported: No 00:07:36.716 00:07:36.716 Admin Command Set Attributes 00:07:36.716 ============================ 00:07:36.716 Security Send/Receive: Not Supported 00:07:36.716 Format NVM: Supported 00:07:36.716 Firmware Activate/Download: Not Supported 00:07:36.716 Namespace Management: Supported 00:07:36.716 Device Self-Test: Not Supported 00:07:36.716 Directives: Supported 00:07:36.716 NVMe-MI: Not Supported 00:07:36.716 Virtualization Management: Not Supported 00:07:36.716 
Doorbell Buffer Config: Supported 00:07:36.716 Get LBA Status Capability: Not Supported 00:07:36.716 Command & Feature Lockdown Capability: Not Supported 00:07:36.716 Abort Command Limit: 4 00:07:36.716 Async Event Request Limit: 4 00:07:36.716 Number of Firmware Slots: N/A 00:07:36.716 Firmware Slot 1 Read-Only: N/A 00:07:36.716 Firmware Activation Without Reset: N/A 00:07:36.716 Multiple Update Detection Support: N/A 00:07:36.716 Firmware Update Granularity: No Information Provided 00:07:36.716 Per-Namespace SMART Log: Yes 00:07:36.716 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.716 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:36.716 Command Effects Log Page: Supported 00:07:36.716 Get Log Page Extended Data: Supported 00:07:36.716 Telemetry Log Pages: Not Supported 00:07:36.716 Persistent Event Log Pages: Not Supported 00:07:36.716 Supported Log Pages Log Page: May Support 00:07:36.716 Commands Supported & Effects Log Page: Not Supported 00:07:36.716 Feature Identifiers & Effects Log Page:May Support 00:07:36.716 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.716 Data Area 4 for Telemetry Log: Not Supported 00:07:36.716 Error Log Page Entries Supported: 1 00:07:36.716 Keep Alive: Not Supported 00:07:36.716 00:07:36.716 NVM Command Set Attributes 00:07:36.716 ========================== 00:07:36.716 Submission Queue Entry Size 00:07:36.716 Max: 64 00:07:36.716 Min: 64 00:07:36.716 Completion Queue Entry Size 00:07:36.716 Max: 16 00:07:36.716 Min: 16 00:07:36.716 Number of Namespaces: 256 00:07:36.716 Compare Command: Supported 00:07:36.716 Write Uncorrectable Command: Not Supported 00:07:36.716 Dataset Management Command: Supported 00:07:36.716 Write Zeroes Command: Supported 00:07:36.716 Set Features Save Field: Supported 00:07:36.716 Reservations: Not Supported 00:07:36.716 Timestamp: Supported 00:07:36.716 Copy: Supported 00:07:36.716 Volatile Write Cache: Present 00:07:36.716 Atomic Write Unit (Normal): 1 00:07:36.716 Atomic Write Unit (PFail): 1 00:07:36.716 Atomic Compare & Write Unit: 1 00:07:36.716 Fused Compare & Write: Not Supported 00:07:36.716 Scatter-Gather List 00:07:36.716 SGL Command Set: Supported 00:07:36.716 SGL Keyed: Not Supported 00:07:36.716 SGL Bit Bucket Descriptor: Not Supported 00:07:36.716 SGL Metadata Pointer: Not Supported 00:07:36.716 Oversized SGL: Not Supported 00:07:36.716 SGL Metadata Address: Not Supported 00:07:36.716 SGL Offset: Not Supported 00:07:36.716 Transport SGL Data Block: Not Supported 00:07:36.716 Replay Protected Memory Block: Not Supported 00:07:36.716 00:07:36.716 Firmware Slot Information 00:07:36.716 ========================= 00:07:36.716 Active slot: 1 00:07:36.716 Slot 1 Firmware Revision: 1.0 00:07:36.716 00:07:36.716 00:07:36.716 Commands Supported and Effects 00:07:36.716 ============================== 00:07:36.716 Admin Commands 00:07:36.716 -------------- 00:07:36.716 Delete I/O Submission Queue (00h): Supported 00:07:36.716 Create I/O Submission Queue (01h): Supported 00:07:36.716 Get Log Page (02h): Supported 00:07:36.716 Delete I/O Completion Queue (04h): Supported 00:07:36.716 Create I/O Completion Queue (05h): Supported 00:07:36.716 Identify (06h): Supported 00:07:36.716 Abort (08h): Supported 00:07:36.716 Set Features (09h): Supported 00:07:36.716 Get Features (0Ah): Supported 00:07:36.716 Asynchronous Event Request (0Ch): Supported 00:07:36.716 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.716 Directive Send (19h): Supported 00:07:36.716 Directive Receive (1Ah): Supported 
00:07:36.716 Virtualization Management (1Ch): Supported 00:07:36.716 Doorbell Buffer Config (7Ch): Supported 00:07:36.716 Format NVM (80h): Supported LBA-Change 00:07:36.716 I/O Commands 00:07:36.716 ------------ 00:07:36.716 Flush (00h): Supported LBA-Change 00:07:36.716 Write (01h): Supported LBA-Change 00:07:36.716 Read (02h): Supported 00:07:36.716 Compare (05h): Supported 00:07:36.716 Write Zeroes (08h): Supported LBA-Change 00:07:36.716 Dataset Management (09h): Supported LBA-Change 00:07:36.716 Unknown (0Ch): Supported 00:07:36.716 Unknown (12h): Supported 00:07:36.716 Copy (19h): Supported LBA-Change 00:07:36.716 Unknown (1Dh): Supported LBA-Change 00:07:36.716 00:07:36.716 Error Log 00:07:36.716 ========= 00:07:36.716 00:07:36.716 Arbitration 00:07:36.716 =========== 00:07:36.716 Arbitration Burst: no limit 00:07:36.716 00:07:36.716 Power Management 00:07:36.716 ================ 00:07:36.716 Number of Power States: 1 00:07:36.716 Current Power State: Power State #0 00:07:36.716 Power State #0: 00:07:36.716 Max Power: 25.00 W 00:07:36.716 Non-Operational State: Operational 00:07:36.716 Entry Latency: 16 microseconds 00:07:36.716 Exit Latency: 4 microseconds 00:07:36.716 Relative Read Throughput: 0 00:07:36.716 Relative Read Latency: 0 00:07:36.716 Relative Write Throughput: 0 00:07:36.716 Relative Write Latency: 0 00:07:36.716 Idle Power: Not Reported 00:07:36.716 Active Power: Not Reported 00:07:36.716 Non-Operational Permissive Mode: Not Supported 00:07:36.716 00:07:36.716 Health Information 00:07:36.716 ================== 00:07:36.716 Critical Warnings: 00:07:36.716 Available Spare Space: OK 00:07:36.716 Temperature: OK 00:07:36.716 Device Reliability: OK 00:07:36.716 Read Only: No 00:07:36.716 Volatile Memory Backup: OK 00:07:36.716 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.716 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.716 Available Spare: 0% 00:07:36.716 Available Spare Threshold: 0% 00:07:36.716 Life Percentage Used: 0% 00:07:36.716 Data Units Read: 718 00:07:36.716 Data Units Written: 646 00:07:36.716 Host Read Commands: 38809 00:07:36.716 Host Write Commands: 38595 00:07:36.716 Controller Busy Time: 0 minutes 00:07:36.716 Power Cycles: 0 00:07:36.716 Power On Hours: 0 hours 00:07:36.716 Unsafe Shutdowns: 0 00:07:36.716 Unrecoverable Media Errors: 0 00:07:36.716 Lifetime Error Log Entries: 0 00:07:36.716 Warning Temperature Time: 0 minutes 00:07:36.716 Critical Temperature Time: 0 minutes 00:07:36.716 00:07:36.716 Number of Queues 00:07:36.716 ================ 00:07:36.716 Number of I/O Submission Queues: 64 00:07:36.716 Number of I/O Completion Queues: 64 00:07:36.716 00:07:36.716 ZNS Specific Controller Data 00:07:36.716 ============================ 00:07:36.716 Zone Append Size Limit: 0 00:07:36.716 00:07:36.716 00:07:36.716 Active Namespaces 00:07:36.716 ================= 00:07:36.716 Namespace ID:1 00:07:36.716 Error Recovery Timeout: Unlimited 00:07:36.716 Command Set Identifier: NVM (00h) 00:07:36.716 Deallocate: Supported 00:07:36.716 Deallocated/Unwritten Error: Supported 00:07:36.716 Deallocated Read Value: All 0x00 00:07:36.716 Deallocate in Write Zeroes: Not Supported 00:07:36.716 Deallocated Guard Field: 0xFFFF 00:07:36.716 Flush: Supported 00:07:36.716 Reservation: Not Supported 00:07:36.716 Metadata Transferred as: Separate Metadata Buffer 00:07:36.716 Namespace Sharing Capabilities: Private 00:07:36.716 Size (in LBAs): 1548666 (5GiB) 00:07:36.716 Capacity (in LBAs): 1548666 (5GiB) 00:07:36.716 Utilization (in LBAs): 1548666 (5GiB) 
00:07:36.716 Thin Provisioning: Not Supported 00:07:36.716 Per-NS Atomic Units: No 00:07:36.716 Maximum Single Source Range Length: 128 00:07:36.716 Maximum Copy Length: 128 00:07:36.716 Maximum Source Range Count: 128 00:07:36.716 NGUID/EUI64 Never Reused: No 00:07:36.716 Namespace Write Protected: No 00:07:36.716 Number of LBA Formats: 8 00:07:36.717 Current LBA Format: LBA Format #07 00:07:36.717 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.717 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.717 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.717 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.717 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.717 LBA Forma[2024-11-17 14:44:22.130325] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62701 terminated unexpected 00:07:36.717 [2024-11-17 14:44:22.131419] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62701 terminated unexpected 00:07:36.717 [2024-11-17 14:44:22.133498] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62701 terminated unexpected 00:07:36.717 [2024-11-17 14:44:22.135362] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62701 terminated unexpected 00:07:36.717 t #05: Data Size: 4096 Metadata Size: 8 00:07:36.717 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.717 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.717 00:07:36.717 NVM Specific Namespace Data 00:07:36.717 =========================== 00:07:36.717 Logical Block Storage Tag Mask: 0 00:07:36.717 Protection Information Capabilities: 00:07:36.717 16b Guard Protection Information Storage Tag Support: No 00:07:36.717 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.717 Storage Tag Check Read Support: No 00:07:36.717 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.717 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.717 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.717 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.717 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.717 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.717 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.717 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.717 ===================================================== 00:07:36.717 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:36.717 ===================================================== 00:07:36.717 Controller Capabilities/Features 00:07:36.717 ================================ 00:07:36.717 Vendor ID: 1b36 00:07:36.717 Subsystem Vendor ID: 1af4 00:07:36.717 Serial Number: 12342 00:07:36.717 Model Number: QEMU NVMe Ctrl 00:07:36.717 Firmware Version: 8.0.0 00:07:36.717 Recommended Arb Burst: 6 00:07:36.717 IEEE OUI Identifier: 00 54 52 00:07:36.717 Multi-path I/O 00:07:36.717 May have multiple subsystem ports: No 00:07:36.717 May have multiple controllers: No 00:07:36.717 Associated with SR-IOV VF: No 00:07:36.717 Max Data Transfer Size: 524288 00:07:36.717 Max Number of Namespaces: 256 
00:07:36.717 Max Number of I/O Queues: 64 00:07:36.717 NVMe Specification Version (VS): 1.4 00:07:36.717 NVMe Specification Version (Identify): 1.4 00:07:36.717 Maximum Queue Entries: 2048 00:07:36.717 Contiguous Queues Required: Yes 00:07:36.717 Arbitration Mechanisms Supported 00:07:36.717 Weighted Round Robin: Not Supported 00:07:36.717 Vendor Specific: Not Supported 00:07:36.717 Reset Timeout: 7500 ms 00:07:36.717 Doorbell Stride: 4 bytes 00:07:36.717 NVM Subsystem Reset: Not Supported 00:07:36.717 Command Sets Supported 00:07:36.717 NVM Command Set: Supported 00:07:36.717 Boot Partition: Not Supported 00:07:36.717 Memory Page Size Minimum: 4096 bytes 00:07:36.717 Memory Page Size Maximum: 65536 bytes 00:07:36.717 Persistent Memory Region: Not Supported 00:07:36.717 Optional Asynchronous Events Supported 00:07:36.717 Namespace Attribute Notices: Supported 00:07:36.717 Firmware Activation Notices: Not Supported 00:07:36.717 ANA Change Notices: Not Supported 00:07:36.717 PLE Aggregate Log Change Notices: Not Supported 00:07:36.717 LBA Status Info Alert Notices: Not Supported 00:07:36.717 EGE Aggregate Log Change Notices: Not Supported 00:07:36.717 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.717 Zone Descriptor Change Notices: Not Supported 00:07:36.717 Discovery Log Change Notices: Not Supported 00:07:36.717 Controller Attributes 00:07:36.717 128-bit Host Identifier: Not Supported 00:07:36.717 Non-Operational Permissive Mode: Not Supported 00:07:36.717 NVM Sets: Not Supported 00:07:36.717 Read Recovery Levels: Not Supported 00:07:36.717 Endurance Groups: Not Supported 00:07:36.717 Predictable Latency Mode: Not Supported 00:07:36.717 Traffic Based Keep ALive: Not Supported 00:07:36.717 Namespace Granularity: Not Supported 00:07:36.717 SQ Associations: Not Supported 00:07:36.717 UUID List: Not Supported 00:07:36.717 Multi-Domain Subsystem: Not Supported 00:07:36.717 Fixed Capacity Management: Not Supported 00:07:36.717 Variable Capacity Management: Not Supported 00:07:36.717 Delete Endurance Group: Not Supported 00:07:36.717 Delete NVM Set: Not Supported 00:07:36.717 Extended LBA Formats Supported: Supported 00:07:36.717 Flexible Data Placement Supported: Not Supported 00:07:36.717 00:07:36.717 Controller Memory Buffer Support 00:07:36.717 ================================ 00:07:36.717 Supported: No 00:07:36.717 00:07:36.717 Persistent Memory Region Support 00:07:36.717 ================================ 00:07:36.717 Supported: No 00:07:36.717 00:07:36.717 Admin Command Set Attributes 00:07:36.717 ============================ 00:07:36.717 Security Send/Receive: Not Supported 00:07:36.717 Format NVM: Supported 00:07:36.717 Firmware Activate/Download: Not Supported 00:07:36.717 Namespace Management: Supported 00:07:36.717 Device Self-Test: Not Supported 00:07:36.717 Directives: Supported 00:07:36.717 NVMe-MI: Not Supported 00:07:36.717 Virtualization Management: Not Supported 00:07:36.717 Doorbell Buffer Config: Supported 00:07:36.717 Get LBA Status Capability: Not Supported 00:07:36.717 Command & Feature Lockdown Capability: Not Supported 00:07:36.717 Abort Command Limit: 4 00:07:36.717 Async Event Request Limit: 4 00:07:36.717 Number of Firmware Slots: N/A 00:07:36.717 Firmware Slot 1 Read-Only: N/A 00:07:36.717 Firmware Activation Without Reset: N/A 00:07:36.717 Multiple Update Detection Support: N/A 00:07:36.717 Firmware Update Granularity: No Information Provided 00:07:36.717 Per-Namespace SMART Log: Yes 00:07:36.717 Asymmetric Namespace Access Log Page: Not Supported 
00:07:36.717 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:36.717 Command Effects Log Page: Supported 00:07:36.717 Get Log Page Extended Data: Supported 00:07:36.717 Telemetry Log Pages: Not Supported 00:07:36.717 Persistent Event Log Pages: Not Supported 00:07:36.717 Supported Log Pages Log Page: May Support 00:07:36.717 Commands Supported & Effects Log Page: Not Supported 00:07:36.717 Feature Identifiers & Effects Log Page:May Support 00:07:36.717 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.717 Data Area 4 for Telemetry Log: Not Supported 00:07:36.717 Error Log Page Entries Supported: 1 00:07:36.717 Keep Alive: Not Supported 00:07:36.717 00:07:36.717 NVM Command Set Attributes 00:07:36.717 ========================== 00:07:36.717 Submission Queue Entry Size 00:07:36.717 Max: 64 00:07:36.717 Min: 64 00:07:36.717 Completion Queue Entry Size 00:07:36.717 Max: 16 00:07:36.717 Min: 16 00:07:36.717 Number of Namespaces: 256 00:07:36.717 Compare Command: Supported 00:07:36.717 Write Uncorrectable Command: Not Supported 00:07:36.717 Dataset Management Command: Supported 00:07:36.717 Write Zeroes Command: Supported 00:07:36.717 Set Features Save Field: Supported 00:07:36.717 Reservations: Not Supported 00:07:36.717 Timestamp: Supported 00:07:36.717 Copy: Supported 00:07:36.717 Volatile Write Cache: Present 00:07:36.718 Atomic Write Unit (Normal): 1 00:07:36.718 Atomic Write Unit (PFail): 1 00:07:36.718 Atomic Compare & Write Unit: 1 00:07:36.718 Fused Compare & Write: Not Supported 00:07:36.718 Scatter-Gather List 00:07:36.718 SGL Command Set: Supported 00:07:36.718 SGL Keyed: Not Supported 00:07:36.718 SGL Bit Bucket Descriptor: Not Supported 00:07:36.718 SGL Metadata Pointer: Not Supported 00:07:36.718 Oversized SGL: Not Supported 00:07:36.718 SGL Metadata Address: Not Supported 00:07:36.718 SGL Offset: Not Supported 00:07:36.718 Transport SGL Data Block: Not Supported 00:07:36.718 Replay Protected Memory Block: Not Supported 00:07:36.718 00:07:36.718 Firmware Slot Information 00:07:36.718 ========================= 00:07:36.718 Active slot: 1 00:07:36.718 Slot 1 Firmware Revision: 1.0 00:07:36.718 00:07:36.718 00:07:36.718 Commands Supported and Effects 00:07:36.718 ============================== 00:07:36.718 Admin Commands 00:07:36.718 -------------- 00:07:36.718 Delete I/O Submission Queue (00h): Supported 00:07:36.718 Create I/O Submission Queue (01h): Supported 00:07:36.718 Get Log Page (02h): Supported 00:07:36.718 Delete I/O Completion Queue (04h): Supported 00:07:36.718 Create I/O Completion Queue (05h): Supported 00:07:36.718 Identify (06h): Supported 00:07:36.718 Abort (08h): Supported 00:07:36.718 Set Features (09h): Supported 00:07:36.718 Get Features (0Ah): Supported 00:07:36.718 Asynchronous Event Request (0Ch): Supported 00:07:36.718 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.718 Directive Send (19h): Supported 00:07:36.718 Directive Receive (1Ah): Supported 00:07:36.718 Virtualization Management (1Ch): Supported 00:07:36.718 Doorbell Buffer Config (7Ch): Supported 00:07:36.718 Format NVM (80h): Supported LBA-Change 00:07:36.718 I/O Commands 00:07:36.718 ------------ 00:07:36.718 Flush (00h): Supported LBA-Change 00:07:36.718 Write (01h): Supported LBA-Change 00:07:36.718 Read (02h): Supported 00:07:36.718 Compare (05h): Supported 00:07:36.718 Write Zeroes (08h): Supported LBA-Change 00:07:36.718 Dataset Management (09h): Supported LBA-Change 00:07:36.718 Unknown (0Ch): Supported 00:07:36.718 Unknown (12h): Supported 00:07:36.718 Copy (19h): 
Supported LBA-Change 00:07:36.718 Unknown (1Dh): Supported LBA-Change 00:07:36.718 00:07:36.718 Error Log 00:07:36.718 ========= 00:07:36.718 00:07:36.718 Arbitration 00:07:36.718 =========== 00:07:36.718 Arbitration Burst: no limit 00:07:36.718 00:07:36.718 Power Management 00:07:36.718 ================ 00:07:36.718 Number of Power States: 1 00:07:36.718 Current Power State: Power State #0 00:07:36.718 Power State #0: 00:07:36.718 Max Power: 25.00 W 00:07:36.718 Non-Operational State: Operational 00:07:36.718 Entry Latency: 16 microseconds 00:07:36.718 Exit Latency: 4 microseconds 00:07:36.718 Relative Read Throughput: 0 00:07:36.718 Relative Read Latency: 0 00:07:36.718 Relative Write Throughput: 0 00:07:36.718 Relative Write Latency: 0 00:07:36.718 Idle Power: Not Reported 00:07:36.718 Active Power: Not Reported 00:07:36.718 Non-Operational Permissive Mode: Not Supported 00:07:36.718 00:07:36.718 Health Information 00:07:36.718 ================== 00:07:36.718 Critical Warnings: 00:07:36.718 Available Spare Space: OK 00:07:36.718 Temperature: OK 00:07:36.718 Device Reliability: OK 00:07:36.718 Read Only: No 00:07:36.718 Volatile Memory Backup: OK 00:07:36.718 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.718 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.718 Available Spare: 0% 00:07:36.718 Available Spare Threshold: 0% 00:07:36.718 Life Percentage Used: 0% 00:07:36.718 Data Units Read: 2213 00:07:36.718 Data Units Written: 2001 00:07:36.718 Host Read Commands: 117433 00:07:36.718 Host Write Commands: 115702 00:07:36.718 Controller Busy Time: 0 minutes 00:07:36.718 Power Cycles: 0 00:07:36.718 Power On Hours: 0 hours 00:07:36.718 Unsafe Shutdowns: 0 00:07:36.718 Unrecoverable Media Errors: 0 00:07:36.718 Lifetime Error Log Entries: 0 00:07:36.718 Warning Temperature Time: 0 minutes 00:07:36.718 Critical Temperature Time: 0 minutes 00:07:36.718 00:07:36.718 Number of Queues 00:07:36.718 ================ 00:07:36.718 Number of I/O Submission Queues: 64 00:07:36.718 Number of I/O Completion Queues: 64 00:07:36.718 00:07:36.718 ZNS Specific Controller Data 00:07:36.718 ============================ 00:07:36.718 Zone Append Size Limit: 0 00:07:36.718 00:07:36.718 00:07:36.718 Active Namespaces 00:07:36.718 ================= 00:07:36.718 Namespace ID:1 00:07:36.718 Error Recovery Timeout: Unlimited 00:07:36.718 Command Set Identifier: NVM (00h) 00:07:36.718 Deallocate: Supported 00:07:36.718 Deallocated/Unwritten Error: Supported 00:07:36.718 Deallocated Read Value: All 0x00 00:07:36.718 Deallocate in Write Zeroes: Not Supported 00:07:36.718 Deallocated Guard Field: 0xFFFF 00:07:36.718 Flush: Supported 00:07:36.718 Reservation: Not Supported 00:07:36.718 Namespace Sharing Capabilities: Private 00:07:36.718 Size (in LBAs): 1048576 (4GiB) 00:07:36.718 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.718 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.718 Thin Provisioning: Not Supported 00:07:36.718 Per-NS Atomic Units: No 00:07:36.718 Maximum Single Source Range Length: 128 00:07:36.718 Maximum Copy Length: 128 00:07:36.718 Maximum Source Range Count: 128 00:07:36.718 NGUID/EUI64 Never Reused: No 00:07:36.718 Namespace Write Protected: No 00:07:36.718 Number of LBA Formats: 8 00:07:36.718 Current LBA Format: LBA Format #04 00:07:36.718 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.718 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.718 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.718 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.718 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.718 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.718 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.718 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.718 00:07:36.718 NVM Specific Namespace Data 00:07:36.718 =========================== 00:07:36.718 Logical Block Storage Tag Mask: 0 00:07:36.718 Protection Information Capabilities: 00:07:36.718 16b Guard Protection Information Storage Tag Support: No 00:07:36.718 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.718 Storage Tag Check Read Support: No 00:07:36.718 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.718 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.718 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.718 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.718 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.718 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.718 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.718 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.718 Namespace ID:2 00:07:36.718 Error Recovery Timeout: Unlimited 00:07:36.718 Command Set Identifier: NVM (00h) 00:07:36.718 Deallocate: Supported 00:07:36.718 Deallocated/Unwritten Error: Supported 00:07:36.718 Deallocated Read Value: All 0x00 00:07:36.718 Deallocate in Write Zeroes: Not Supported 00:07:36.718 Deallocated Guard Field: 0xFFFF 00:07:36.718 Flush: Supported 00:07:36.718 Reservation: Not Supported 00:07:36.718 Namespace Sharing Capabilities: Private 00:07:36.718 Size (in LBAs): 1048576 (4GiB) 00:07:36.718 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.718 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.718 Thin Provisioning: Not Supported 00:07:36.718 Per-NS Atomic Units: No 00:07:36.718 Maximum Single Source Range Length: 128 00:07:36.718 Maximum Copy Length: 128 00:07:36.718 Maximum Source Range Count: 128 00:07:36.718 NGUID/EUI64 Never Reused: No 00:07:36.718 Namespace Write Protected: No 00:07:36.718 Number of LBA Formats: 8 00:07:36.718 Current LBA Format: LBA Format #04 00:07:36.718 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.718 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.718 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.718 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.718 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.718 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.718 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.718 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.718 00:07:36.719 NVM Specific Namespace Data 00:07:36.719 =========================== 00:07:36.719 Logical Block Storage Tag Mask: 0 00:07:36.719 Protection Information Capabilities: 00:07:36.719 16b Guard Protection Information Storage Tag Support: No 00:07:36.719 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.719 Storage Tag Check Read Support: No 00:07:36.719 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Namespace ID:3 00:07:36.719 Error Recovery Timeout: Unlimited 00:07:36.719 Command Set Identifier: NVM (00h) 00:07:36.719 Deallocate: Supported 00:07:36.719 Deallocated/Unwritten Error: Supported 00:07:36.719 Deallocated Read Value: All 0x00 00:07:36.719 Deallocate in Write Zeroes: Not Supported 00:07:36.719 Deallocated Guard Field: 0xFFFF 00:07:36.719 Flush: Supported 00:07:36.719 Reservation: Not Supported 00:07:36.719 Namespace Sharing Capabilities: Private 00:07:36.719 Size (in LBAs): 1048576 (4GiB) 00:07:36.719 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.719 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.719 Thin Provisioning: Not Supported 00:07:36.719 Per-NS Atomic Units: No 00:07:36.719 Maximum Single Source Range Length: 128 00:07:36.719 Maximum Copy Length: 128 00:07:36.719 Maximum Source Range Count: 128 00:07:36.719 NGUID/EUI64 Never Reused: No 00:07:36.719 Namespace Write Protected: No 00:07:36.719 Number of LBA Formats: 8 00:07:36.719 Current LBA Format: LBA Format #04 00:07:36.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.719 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.719 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.719 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.719 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.719 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.719 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.719 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.719 00:07:36.719 NVM Specific Namespace Data 00:07:36.719 =========================== 00:07:36.719 Logical Block Storage Tag Mask: 0 00:07:36.719 Protection Information Capabilities: 00:07:36.719 16b Guard Protection Information Storage Tag Support: No 00:07:36.719 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.719 Storage Tag Check Read Support: No 00:07:36.719 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.719 14:44:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:36.719 14:44:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:36.981 ===================================================== 00:07:36.981 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:36.981 ===================================================== 00:07:36.981 Controller Capabilities/Features 00:07:36.981 ================================ 00:07:36.981 Vendor ID: 1b36 00:07:36.981 Subsystem Vendor ID: 1af4 00:07:36.981 Serial Number: 12340 00:07:36.981 Model Number: QEMU NVMe Ctrl 00:07:36.981 Firmware Version: 8.0.0 00:07:36.981 Recommended Arb Burst: 6 00:07:36.981 IEEE OUI Identifier: 00 54 52 00:07:36.981 Multi-path I/O 00:07:36.981 May have multiple subsystem ports: No 00:07:36.981 May have multiple controllers: No 00:07:36.981 Associated with SR-IOV VF: No 00:07:36.981 Max Data Transfer Size: 524288 00:07:36.981 Max Number of Namespaces: 256 00:07:36.981 Max Number of I/O Queues: 64 00:07:36.981 NVMe Specification Version (VS): 1.4 00:07:36.981 NVMe Specification Version (Identify): 1.4 00:07:36.981 Maximum Queue Entries: 2048 00:07:36.981 Contiguous Queues Required: Yes 00:07:36.981 Arbitration Mechanisms Supported 00:07:36.981 Weighted Round Robin: Not Supported 00:07:36.981 Vendor Specific: Not Supported 00:07:36.981 Reset Timeout: 7500 ms 00:07:36.981 Doorbell Stride: 4 bytes 00:07:36.981 NVM Subsystem Reset: Not Supported 00:07:36.981 Command Sets Supported 00:07:36.981 NVM Command Set: Supported 00:07:36.981 Boot Partition: Not Supported 00:07:36.981 Memory Page Size Minimum: 4096 bytes 00:07:36.981 Memory Page Size Maximum: 65536 bytes 00:07:36.981 Persistent Memory Region: Not Supported 00:07:36.981 Optional Asynchronous Events Supported 00:07:36.981 Namespace Attribute Notices: Supported 00:07:36.981 Firmware Activation Notices: Not Supported 00:07:36.981 ANA Change Notices: Not Supported 00:07:36.981 PLE Aggregate Log Change Notices: Not Supported 00:07:36.981 LBA Status Info Alert Notices: Not Supported 00:07:36.981 EGE Aggregate Log Change Notices: Not Supported 00:07:36.981 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.981 Zone Descriptor Change Notices: Not Supported 00:07:36.981 Discovery Log Change Notices: Not Supported 00:07:36.981 Controller Attributes 00:07:36.981 128-bit Host Identifier: Not Supported 00:07:36.981 Non-Operational Permissive Mode: Not Supported 00:07:36.981 NVM Sets: Not Supported 00:07:36.981 Read Recovery Levels: Not Supported 00:07:36.981 Endurance Groups: Not Supported 00:07:36.981 Predictable Latency Mode: Not Supported 00:07:36.981 Traffic Based Keep ALive: Not Supported 00:07:36.981 Namespace Granularity: Not Supported 00:07:36.981 SQ Associations: Not Supported 00:07:36.981 UUID List: Not Supported 00:07:36.981 Multi-Domain Subsystem: Not Supported 00:07:36.981 Fixed Capacity Management: Not Supported 00:07:36.981 Variable Capacity Management: Not Supported 00:07:36.981 Delete Endurance Group: Not Supported 00:07:36.981 Delete NVM Set: Not Supported 00:07:36.981 Extended LBA Formats Supported: Supported 00:07:36.981 Flexible Data Placement Supported: Not Supported 00:07:36.981 00:07:36.981 Controller Memory Buffer Support 00:07:36.981 ================================ 00:07:36.981 Supported: No 00:07:36.981 00:07:36.981 Persistent Memory Region Support 00:07:36.981 ================================ 00:07:36.981 Supported: No 00:07:36.981 00:07:36.981 Admin Command Set Attributes 00:07:36.981 ============================ 00:07:36.981 Security Send/Receive: Not Supported 00:07:36.981 
Format NVM: Supported 00:07:36.981 Firmware Activate/Download: Not Supported 00:07:36.981 Namespace Management: Supported 00:07:36.981 Device Self-Test: Not Supported 00:07:36.981 Directives: Supported 00:07:36.981 NVMe-MI: Not Supported 00:07:36.981 Virtualization Management: Not Supported 00:07:36.981 Doorbell Buffer Config: Supported 00:07:36.981 Get LBA Status Capability: Not Supported 00:07:36.981 Command & Feature Lockdown Capability: Not Supported 00:07:36.981 Abort Command Limit: 4 00:07:36.981 Async Event Request Limit: 4 00:07:36.981 Number of Firmware Slots: N/A 00:07:36.981 Firmware Slot 1 Read-Only: N/A 00:07:36.981 Firmware Activation Without Reset: N/A 00:07:36.981 Multiple Update Detection Support: N/A 00:07:36.981 Firmware Update Granularity: No Information Provided 00:07:36.981 Per-Namespace SMART Log: Yes 00:07:36.981 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.981 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:36.981 Command Effects Log Page: Supported 00:07:36.981 Get Log Page Extended Data: Supported 00:07:36.981 Telemetry Log Pages: Not Supported 00:07:36.981 Persistent Event Log Pages: Not Supported 00:07:36.981 Supported Log Pages Log Page: May Support 00:07:36.981 Commands Supported & Effects Log Page: Not Supported 00:07:36.981 Feature Identifiers & Effects Log Page:May Support 00:07:36.981 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.981 Data Area 4 for Telemetry Log: Not Supported 00:07:36.981 Error Log Page Entries Supported: 1 00:07:36.981 Keep Alive: Not Supported 00:07:36.981 00:07:36.981 NVM Command Set Attributes 00:07:36.981 ========================== 00:07:36.981 Submission Queue Entry Size 00:07:36.981 Max: 64 00:07:36.981 Min: 64 00:07:36.981 Completion Queue Entry Size 00:07:36.981 Max: 16 00:07:36.981 Min: 16 00:07:36.981 Number of Namespaces: 256 00:07:36.981 Compare Command: Supported 00:07:36.981 Write Uncorrectable Command: Not Supported 00:07:36.981 Dataset Management Command: Supported 00:07:36.981 Write Zeroes Command: Supported 00:07:36.981 Set Features Save Field: Supported 00:07:36.981 Reservations: Not Supported 00:07:36.982 Timestamp: Supported 00:07:36.982 Copy: Supported 00:07:36.982 Volatile Write Cache: Present 00:07:36.982 Atomic Write Unit (Normal): 1 00:07:36.982 Atomic Write Unit (PFail): 1 00:07:36.982 Atomic Compare & Write Unit: 1 00:07:36.982 Fused Compare & Write: Not Supported 00:07:36.982 Scatter-Gather List 00:07:36.982 SGL Command Set: Supported 00:07:36.982 SGL Keyed: Not Supported 00:07:36.982 SGL Bit Bucket Descriptor: Not Supported 00:07:36.982 SGL Metadata Pointer: Not Supported 00:07:36.982 Oversized SGL: Not Supported 00:07:36.982 SGL Metadata Address: Not Supported 00:07:36.982 SGL Offset: Not Supported 00:07:36.982 Transport SGL Data Block: Not Supported 00:07:36.982 Replay Protected Memory Block: Not Supported 00:07:36.982 00:07:36.982 Firmware Slot Information 00:07:36.982 ========================= 00:07:36.982 Active slot: 1 00:07:36.982 Slot 1 Firmware Revision: 1.0 00:07:36.982 00:07:36.982 00:07:36.982 Commands Supported and Effects 00:07:36.982 ============================== 00:07:36.982 Admin Commands 00:07:36.982 -------------- 00:07:36.982 Delete I/O Submission Queue (00h): Supported 00:07:36.982 Create I/O Submission Queue (01h): Supported 00:07:36.982 Get Log Page (02h): Supported 00:07:36.982 Delete I/O Completion Queue (04h): Supported 00:07:36.982 Create I/O Completion Queue (05h): Supported 00:07:36.982 Identify (06h): Supported 00:07:36.982 Abort (08h): Supported 
00:07:36.982 Set Features (09h): Supported 00:07:36.982 Get Features (0Ah): Supported 00:07:36.982 Asynchronous Event Request (0Ch): Supported 00:07:36.982 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.982 Directive Send (19h): Supported 00:07:36.982 Directive Receive (1Ah): Supported 00:07:36.982 Virtualization Management (1Ch): Supported 00:07:36.982 Doorbell Buffer Config (7Ch): Supported 00:07:36.982 Format NVM (80h): Supported LBA-Change 00:07:36.982 I/O Commands 00:07:36.982 ------------ 00:07:36.982 Flush (00h): Supported LBA-Change 00:07:36.982 Write (01h): Supported LBA-Change 00:07:36.982 Read (02h): Supported 00:07:36.982 Compare (05h): Supported 00:07:36.982 Write Zeroes (08h): Supported LBA-Change 00:07:36.982 Dataset Management (09h): Supported LBA-Change 00:07:36.982 Unknown (0Ch): Supported 00:07:36.982 Unknown (12h): Supported 00:07:36.982 Copy (19h): Supported LBA-Change 00:07:36.982 Unknown (1Dh): Supported LBA-Change 00:07:36.982 00:07:36.982 Error Log 00:07:36.982 ========= 00:07:36.982 00:07:36.982 Arbitration 00:07:36.982 =========== 00:07:36.982 Arbitration Burst: no limit 00:07:36.982 00:07:36.982 Power Management 00:07:36.982 ================ 00:07:36.982 Number of Power States: 1 00:07:36.982 Current Power State: Power State #0 00:07:36.982 Power State #0: 00:07:36.982 Max Power: 25.00 W 00:07:36.982 Non-Operational State: Operational 00:07:36.982 Entry Latency: 16 microseconds 00:07:36.982 Exit Latency: 4 microseconds 00:07:36.982 Relative Read Throughput: 0 00:07:36.982 Relative Read Latency: 0 00:07:36.982 Relative Write Throughput: 0 00:07:36.982 Relative Write Latency: 0 00:07:36.982 Idle Power: Not Reported 00:07:36.982 Active Power: Not Reported 00:07:36.982 Non-Operational Permissive Mode: Not Supported 00:07:36.982 00:07:36.982 Health Information 00:07:36.982 ================== 00:07:36.982 Critical Warnings: 00:07:36.982 Available Spare Space: OK 00:07:36.982 Temperature: OK 00:07:36.982 Device Reliability: OK 00:07:36.982 Read Only: No 00:07:36.982 Volatile Memory Backup: OK 00:07:36.982 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.982 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.982 Available Spare: 0% 00:07:36.982 Available Spare Threshold: 0% 00:07:36.982 Life Percentage Used: 0% 00:07:36.982 Data Units Read: 718 00:07:36.982 Data Units Written: 646 00:07:36.982 Host Read Commands: 38809 00:07:36.982 Host Write Commands: 38595 00:07:36.982 Controller Busy Time: 0 minutes 00:07:36.982 Power Cycles: 0 00:07:36.982 Power On Hours: 0 hours 00:07:36.982 Unsafe Shutdowns: 0 00:07:36.982 Unrecoverable Media Errors: 0 00:07:36.982 Lifetime Error Log Entries: 0 00:07:36.982 Warning Temperature Time: 0 minutes 00:07:36.982 Critical Temperature Time: 0 minutes 00:07:36.982 00:07:36.982 Number of Queues 00:07:36.982 ================ 00:07:36.982 Number of I/O Submission Queues: 64 00:07:36.982 Number of I/O Completion Queues: 64 00:07:36.982 00:07:36.982 ZNS Specific Controller Data 00:07:36.982 ============================ 00:07:36.982 Zone Append Size Limit: 0 00:07:36.982 00:07:36.982 00:07:36.982 Active Namespaces 00:07:36.982 ================= 00:07:36.982 Namespace ID:1 00:07:36.982 Error Recovery Timeout: Unlimited 00:07:36.982 Command Set Identifier: NVM (00h) 00:07:36.982 Deallocate: Supported 00:07:36.982 Deallocated/Unwritten Error: Supported 00:07:36.982 Deallocated Read Value: All 0x00 00:07:36.982 Deallocate in Write Zeroes: Not Supported 00:07:36.982 Deallocated Guard Field: 0xFFFF 00:07:36.982 Flush: 
Supported 00:07:36.982 Reservation: Not Supported 00:07:36.982 Metadata Transferred as: Separate Metadata Buffer 00:07:36.982 Namespace Sharing Capabilities: Private 00:07:36.982 Size (in LBAs): 1548666 (5GiB) 00:07:36.982 Capacity (in LBAs): 1548666 (5GiB) 00:07:36.982 Utilization (in LBAs): 1548666 (5GiB) 00:07:36.982 Thin Provisioning: Not Supported 00:07:36.982 Per-NS Atomic Units: No 00:07:36.982 Maximum Single Source Range Length: 128 00:07:36.982 Maximum Copy Length: 128 00:07:36.982 Maximum Source Range Count: 128 00:07:36.982 NGUID/EUI64 Never Reused: No 00:07:36.982 Namespace Write Protected: No 00:07:36.982 Number of LBA Formats: 8 00:07:36.982 Current LBA Format: LBA Format #07 00:07:36.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.982 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.982 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.982 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.982 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.982 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.982 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.982 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.982 00:07:36.982 NVM Specific Namespace Data 00:07:36.982 =========================== 00:07:36.982 Logical Block Storage Tag Mask: 0 00:07:36.982 Protection Information Capabilities: 00:07:36.982 16b Guard Protection Information Storage Tag Support: No 00:07:36.982 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.982 Storage Tag Check Read Support: No 00:07:36.982 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.982 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.982 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.982 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.982 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.982 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.982 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.982 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.982 14:44:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:36.982 14:44:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:37.244 ===================================================== 00:07:37.244 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:37.244 ===================================================== 00:07:37.244 Controller Capabilities/Features 00:07:37.244 ================================ 00:07:37.244 Vendor ID: 1b36 00:07:37.244 Subsystem Vendor ID: 1af4 00:07:37.244 Serial Number: 12341 00:07:37.244 Model Number: QEMU NVMe Ctrl 00:07:37.244 Firmware Version: 8.0.0 00:07:37.244 Recommended Arb Burst: 6 00:07:37.244 IEEE OUI Identifier: 00 54 52 00:07:37.244 Multi-path I/O 00:07:37.244 May have multiple subsystem ports: No 00:07:37.244 May have multiple controllers: No 00:07:37.244 Associated with SR-IOV VF: No 00:07:37.244 Max Data Transfer Size: 524288 00:07:37.244 Max Number of Namespaces: 256 00:07:37.244 Max Number of I/O Queues: 64 00:07:37.244 NVMe 
Specification Version (VS): 1.4 00:07:37.244 NVMe Specification Version (Identify): 1.4 00:07:37.244 Maximum Queue Entries: 2048 00:07:37.244 Contiguous Queues Required: Yes 00:07:37.244 Arbitration Mechanisms Supported 00:07:37.244 Weighted Round Robin: Not Supported 00:07:37.244 Vendor Specific: Not Supported 00:07:37.244 Reset Timeout: 7500 ms 00:07:37.244 Doorbell Stride: 4 bytes 00:07:37.244 NVM Subsystem Reset: Not Supported 00:07:37.244 Command Sets Supported 00:07:37.244 NVM Command Set: Supported 00:07:37.244 Boot Partition: Not Supported 00:07:37.244 Memory Page Size Minimum: 4096 bytes 00:07:37.245 Memory Page Size Maximum: 65536 bytes 00:07:37.245 Persistent Memory Region: Not Supported 00:07:37.245 Optional Asynchronous Events Supported 00:07:37.245 Namespace Attribute Notices: Supported 00:07:37.245 Firmware Activation Notices: Not Supported 00:07:37.245 ANA Change Notices: Not Supported 00:07:37.245 PLE Aggregate Log Change Notices: Not Supported 00:07:37.245 LBA Status Info Alert Notices: Not Supported 00:07:37.245 EGE Aggregate Log Change Notices: Not Supported 00:07:37.245 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.245 Zone Descriptor Change Notices: Not Supported 00:07:37.245 Discovery Log Change Notices: Not Supported 00:07:37.245 Controller Attributes 00:07:37.245 128-bit Host Identifier: Not Supported 00:07:37.245 Non-Operational Permissive Mode: Not Supported 00:07:37.245 NVM Sets: Not Supported 00:07:37.245 Read Recovery Levels: Not Supported 00:07:37.245 Endurance Groups: Not Supported 00:07:37.245 Predictable Latency Mode: Not Supported 00:07:37.245 Traffic Based Keep ALive: Not Supported 00:07:37.245 Namespace Granularity: Not Supported 00:07:37.245 SQ Associations: Not Supported 00:07:37.245 UUID List: Not Supported 00:07:37.245 Multi-Domain Subsystem: Not Supported 00:07:37.245 Fixed Capacity Management: Not Supported 00:07:37.245 Variable Capacity Management: Not Supported 00:07:37.245 Delete Endurance Group: Not Supported 00:07:37.245 Delete NVM Set: Not Supported 00:07:37.245 Extended LBA Formats Supported: Supported 00:07:37.245 Flexible Data Placement Supported: Not Supported 00:07:37.245 00:07:37.245 Controller Memory Buffer Support 00:07:37.245 ================================ 00:07:37.245 Supported: No 00:07:37.245 00:07:37.245 Persistent Memory Region Support 00:07:37.245 ================================ 00:07:37.245 Supported: No 00:07:37.245 00:07:37.245 Admin Command Set Attributes 00:07:37.245 ============================ 00:07:37.245 Security Send/Receive: Not Supported 00:07:37.245 Format NVM: Supported 00:07:37.245 Firmware Activate/Download: Not Supported 00:07:37.245 Namespace Management: Supported 00:07:37.245 Device Self-Test: Not Supported 00:07:37.245 Directives: Supported 00:07:37.245 NVMe-MI: Not Supported 00:07:37.245 Virtualization Management: Not Supported 00:07:37.245 Doorbell Buffer Config: Supported 00:07:37.245 Get LBA Status Capability: Not Supported 00:07:37.245 Command & Feature Lockdown Capability: Not Supported 00:07:37.245 Abort Command Limit: 4 00:07:37.245 Async Event Request Limit: 4 00:07:37.245 Number of Firmware Slots: N/A 00:07:37.245 Firmware Slot 1 Read-Only: N/A 00:07:37.245 Firmware Activation Without Reset: N/A 00:07:37.245 Multiple Update Detection Support: N/A 00:07:37.245 Firmware Update Granularity: No Information Provided 00:07:37.245 Per-Namespace SMART Log: Yes 00:07:37.245 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.245 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:37.245 Command Effects Log Page: Supported 00:07:37.245 Get Log Page Extended Data: Supported 00:07:37.245 Telemetry Log Pages: Not Supported 00:07:37.245 Persistent Event Log Pages: Not Supported 00:07:37.245 Supported Log Pages Log Page: May Support 00:07:37.245 Commands Supported & Effects Log Page: Not Supported 00:07:37.245 Feature Identifiers & Effects Log Page:May Support 00:07:37.245 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.245 Data Area 4 for Telemetry Log: Not Supported 00:07:37.245 Error Log Page Entries Supported: 1 00:07:37.245 Keep Alive: Not Supported 00:07:37.245 00:07:37.245 NVM Command Set Attributes 00:07:37.245 ========================== 00:07:37.245 Submission Queue Entry Size 00:07:37.245 Max: 64 00:07:37.245 Min: 64 00:07:37.245 Completion Queue Entry Size 00:07:37.245 Max: 16 00:07:37.245 Min: 16 00:07:37.245 Number of Namespaces: 256 00:07:37.245 Compare Command: Supported 00:07:37.245 Write Uncorrectable Command: Not Supported 00:07:37.245 Dataset Management Command: Supported 00:07:37.245 Write Zeroes Command: Supported 00:07:37.245 Set Features Save Field: Supported 00:07:37.245 Reservations: Not Supported 00:07:37.245 Timestamp: Supported 00:07:37.245 Copy: Supported 00:07:37.245 Volatile Write Cache: Present 00:07:37.245 Atomic Write Unit (Normal): 1 00:07:37.245 Atomic Write Unit (PFail): 1 00:07:37.245 Atomic Compare & Write Unit: 1 00:07:37.245 Fused Compare & Write: Not Supported 00:07:37.245 Scatter-Gather List 00:07:37.245 SGL Command Set: Supported 00:07:37.245 SGL Keyed: Not Supported 00:07:37.245 SGL Bit Bucket Descriptor: Not Supported 00:07:37.245 SGL Metadata Pointer: Not Supported 00:07:37.245 Oversized SGL: Not Supported 00:07:37.245 SGL Metadata Address: Not Supported 00:07:37.245 SGL Offset: Not Supported 00:07:37.245 Transport SGL Data Block: Not Supported 00:07:37.245 Replay Protected Memory Block: Not Supported 00:07:37.245 00:07:37.245 Firmware Slot Information 00:07:37.245 ========================= 00:07:37.245 Active slot: 1 00:07:37.245 Slot 1 Firmware Revision: 1.0 00:07:37.245 00:07:37.245 00:07:37.245 Commands Supported and Effects 00:07:37.245 ============================== 00:07:37.245 Admin Commands 00:07:37.245 -------------- 00:07:37.245 Delete I/O Submission Queue (00h): Supported 00:07:37.245 Create I/O Submission Queue (01h): Supported 00:07:37.245 Get Log Page (02h): Supported 00:07:37.245 Delete I/O Completion Queue (04h): Supported 00:07:37.245 Create I/O Completion Queue (05h): Supported 00:07:37.245 Identify (06h): Supported 00:07:37.245 Abort (08h): Supported 00:07:37.245 Set Features (09h): Supported 00:07:37.245 Get Features (0Ah): Supported 00:07:37.245 Asynchronous Event Request (0Ch): Supported 00:07:37.245 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.245 Directive Send (19h): Supported 00:07:37.245 Directive Receive (1Ah): Supported 00:07:37.245 Virtualization Management (1Ch): Supported 00:07:37.245 Doorbell Buffer Config (7Ch): Supported 00:07:37.245 Format NVM (80h): Supported LBA-Change 00:07:37.245 I/O Commands 00:07:37.245 ------------ 00:07:37.245 Flush (00h): Supported LBA-Change 00:07:37.245 Write (01h): Supported LBA-Change 00:07:37.245 Read (02h): Supported 00:07:37.245 Compare (05h): Supported 00:07:37.245 Write Zeroes (08h): Supported LBA-Change 00:07:37.245 Dataset Management (09h): Supported LBA-Change 00:07:37.245 Unknown (0Ch): Supported 00:07:37.245 Unknown (12h): Supported 00:07:37.245 Copy (19h): Supported LBA-Change 00:07:37.245 Unknown (1Dh): 
Supported LBA-Change 00:07:37.245 00:07:37.245 Error Log 00:07:37.245 ========= 00:07:37.245 00:07:37.245 Arbitration 00:07:37.245 =========== 00:07:37.245 Arbitration Burst: no limit 00:07:37.245 00:07:37.245 Power Management 00:07:37.245 ================ 00:07:37.245 Number of Power States: 1 00:07:37.245 Current Power State: Power State #0 00:07:37.245 Power State #0: 00:07:37.245 Max Power: 25.00 W 00:07:37.245 Non-Operational State: Operational 00:07:37.245 Entry Latency: 16 microseconds 00:07:37.245 Exit Latency: 4 microseconds 00:07:37.245 Relative Read Throughput: 0 00:07:37.245 Relative Read Latency: 0 00:07:37.245 Relative Write Throughput: 0 00:07:37.245 Relative Write Latency: 0 00:07:37.245 Idle Power: Not Reported 00:07:37.245 Active Power: Not Reported 00:07:37.245 Non-Operational Permissive Mode: Not Supported 00:07:37.245 00:07:37.245 Health Information 00:07:37.245 ================== 00:07:37.245 Critical Warnings: 00:07:37.245 Available Spare Space: OK 00:07:37.245 Temperature: OK 00:07:37.245 Device Reliability: OK 00:07:37.245 Read Only: No 00:07:37.245 Volatile Memory Backup: OK 00:07:37.245 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.245 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.245 Available Spare: 0% 00:07:37.245 Available Spare Threshold: 0% 00:07:37.245 Life Percentage Used: 0% 00:07:37.245 Data Units Read: 1105 00:07:37.245 Data Units Written: 977 00:07:37.245 Host Read Commands: 57483 00:07:37.245 Host Write Commands: 56363 00:07:37.245 Controller Busy Time: 0 minutes 00:07:37.245 Power Cycles: 0 00:07:37.245 Power On Hours: 0 hours 00:07:37.245 Unsafe Shutdowns: 0 00:07:37.245 Unrecoverable Media Errors: 0 00:07:37.245 Lifetime Error Log Entries: 0 00:07:37.245 Warning Temperature Time: 0 minutes 00:07:37.245 Critical Temperature Time: 0 minutes 00:07:37.246 00:07:37.246 Number of Queues 00:07:37.246 ================ 00:07:37.246 Number of I/O Submission Queues: 64 00:07:37.246 Number of I/O Completion Queues: 64 00:07:37.246 00:07:37.246 ZNS Specific Controller Data 00:07:37.246 ============================ 00:07:37.246 Zone Append Size Limit: 0 00:07:37.246 00:07:37.246 00:07:37.246 Active Namespaces 00:07:37.246 ================= 00:07:37.246 Namespace ID:1 00:07:37.246 Error Recovery Timeout: Unlimited 00:07:37.246 Command Set Identifier: NVM (00h) 00:07:37.246 Deallocate: Supported 00:07:37.246 Deallocated/Unwritten Error: Supported 00:07:37.246 Deallocated Read Value: All 0x00 00:07:37.246 Deallocate in Write Zeroes: Not Supported 00:07:37.246 Deallocated Guard Field: 0xFFFF 00:07:37.246 Flush: Supported 00:07:37.246 Reservation: Not Supported 00:07:37.246 Namespace Sharing Capabilities: Private 00:07:37.246 Size (in LBAs): 1310720 (5GiB) 00:07:37.246 Capacity (in LBAs): 1310720 (5GiB) 00:07:37.246 Utilization (in LBAs): 1310720 (5GiB) 00:07:37.246 Thin Provisioning: Not Supported 00:07:37.246 Per-NS Atomic Units: No 00:07:37.246 Maximum Single Source Range Length: 128 00:07:37.246 Maximum Copy Length: 128 00:07:37.246 Maximum Source Range Count: 128 00:07:37.246 NGUID/EUI64 Never Reused: No 00:07:37.246 Namespace Write Protected: No 00:07:37.246 Number of LBA Formats: 8 00:07:37.246 Current LBA Format: LBA Format #04 00:07:37.246 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.246 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.246 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.246 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.246 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:37.246 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.246 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.246 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.246 00:07:37.246 NVM Specific Namespace Data 00:07:37.246 =========================== 00:07:37.246 Logical Block Storage Tag Mask: 0 00:07:37.246 Protection Information Capabilities: 00:07:37.246 16b Guard Protection Information Storage Tag Support: No 00:07:37.246 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.246 Storage Tag Check Read Support: No 00:07:37.246 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.246 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.246 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.246 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.246 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.246 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.246 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.246 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.246 14:44:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:37.246 14:44:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:37.506 ===================================================== 00:07:37.506 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:37.506 ===================================================== 00:07:37.506 Controller Capabilities/Features 00:07:37.506 ================================ 00:07:37.506 Vendor ID: 1b36 00:07:37.506 Subsystem Vendor ID: 1af4 00:07:37.506 Serial Number: 12342 00:07:37.506 Model Number: QEMU NVMe Ctrl 00:07:37.506 Firmware Version: 8.0.0 00:07:37.506 Recommended Arb Burst: 6 00:07:37.506 IEEE OUI Identifier: 00 54 52 00:07:37.506 Multi-path I/O 00:07:37.506 May have multiple subsystem ports: No 00:07:37.506 May have multiple controllers: No 00:07:37.506 Associated with SR-IOV VF: No 00:07:37.506 Max Data Transfer Size: 524288 00:07:37.506 Max Number of Namespaces: 256 00:07:37.506 Max Number of I/O Queues: 64 00:07:37.506 NVMe Specification Version (VS): 1.4 00:07:37.506 NVMe Specification Version (Identify): 1.4 00:07:37.506 Maximum Queue Entries: 2048 00:07:37.506 Contiguous Queues Required: Yes 00:07:37.506 Arbitration Mechanisms Supported 00:07:37.506 Weighted Round Robin: Not Supported 00:07:37.506 Vendor Specific: Not Supported 00:07:37.506 Reset Timeout: 7500 ms 00:07:37.506 Doorbell Stride: 4 bytes 00:07:37.506 NVM Subsystem Reset: Not Supported 00:07:37.506 Command Sets Supported 00:07:37.506 NVM Command Set: Supported 00:07:37.507 Boot Partition: Not Supported 00:07:37.507 Memory Page Size Minimum: 4096 bytes 00:07:37.507 Memory Page Size Maximum: 65536 bytes 00:07:37.507 Persistent Memory Region: Not Supported 00:07:37.507 Optional Asynchronous Events Supported 00:07:37.507 Namespace Attribute Notices: Supported 00:07:37.507 Firmware Activation Notices: Not Supported 00:07:37.507 ANA Change Notices: Not Supported 00:07:37.507 PLE Aggregate Log Change Notices: Not Supported 00:07:37.507 LBA Status Info Alert Notices: 
Not Supported 00:07:37.507 EGE Aggregate Log Change Notices: Not Supported 00:07:37.507 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.507 Zone Descriptor Change Notices: Not Supported 00:07:37.507 Discovery Log Change Notices: Not Supported 00:07:37.507 Controller Attributes 00:07:37.507 128-bit Host Identifier: Not Supported 00:07:37.507 Non-Operational Permissive Mode: Not Supported 00:07:37.507 NVM Sets: Not Supported 00:07:37.507 Read Recovery Levels: Not Supported 00:07:37.507 Endurance Groups: Not Supported 00:07:37.507 Predictable Latency Mode: Not Supported 00:07:37.507 Traffic Based Keep ALive: Not Supported 00:07:37.507 Namespace Granularity: Not Supported 00:07:37.507 SQ Associations: Not Supported 00:07:37.507 UUID List: Not Supported 00:07:37.507 Multi-Domain Subsystem: Not Supported 00:07:37.507 Fixed Capacity Management: Not Supported 00:07:37.507 Variable Capacity Management: Not Supported 00:07:37.507 Delete Endurance Group: Not Supported 00:07:37.507 Delete NVM Set: Not Supported 00:07:37.507 Extended LBA Formats Supported: Supported 00:07:37.507 Flexible Data Placement Supported: Not Supported 00:07:37.507 00:07:37.507 Controller Memory Buffer Support 00:07:37.507 ================================ 00:07:37.507 Supported: No 00:07:37.507 00:07:37.507 Persistent Memory Region Support 00:07:37.507 ================================ 00:07:37.507 Supported: No 00:07:37.507 00:07:37.507 Admin Command Set Attributes 00:07:37.507 ============================ 00:07:37.507 Security Send/Receive: Not Supported 00:07:37.507 Format NVM: Supported 00:07:37.507 Firmware Activate/Download: Not Supported 00:07:37.507 Namespace Management: Supported 00:07:37.507 Device Self-Test: Not Supported 00:07:37.507 Directives: Supported 00:07:37.507 NVMe-MI: Not Supported 00:07:37.507 Virtualization Management: Not Supported 00:07:37.507 Doorbell Buffer Config: Supported 00:07:37.507 Get LBA Status Capability: Not Supported 00:07:37.507 Command & Feature Lockdown Capability: Not Supported 00:07:37.507 Abort Command Limit: 4 00:07:37.507 Async Event Request Limit: 4 00:07:37.507 Number of Firmware Slots: N/A 00:07:37.507 Firmware Slot 1 Read-Only: N/A 00:07:37.507 Firmware Activation Without Reset: N/A 00:07:37.507 Multiple Update Detection Support: N/A 00:07:37.507 Firmware Update Granularity: No Information Provided 00:07:37.507 Per-Namespace SMART Log: Yes 00:07:37.507 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.507 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:37.507 Command Effects Log Page: Supported 00:07:37.507 Get Log Page Extended Data: Supported 00:07:37.507 Telemetry Log Pages: Not Supported 00:07:37.507 Persistent Event Log Pages: Not Supported 00:07:37.507 Supported Log Pages Log Page: May Support 00:07:37.507 Commands Supported & Effects Log Page: Not Supported 00:07:37.507 Feature Identifiers & Effects Log Page:May Support 00:07:37.507 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.507 Data Area 4 for Telemetry Log: Not Supported 00:07:37.507 Error Log Page Entries Supported: 1 00:07:37.507 Keep Alive: Not Supported 00:07:37.507 00:07:37.507 NVM Command Set Attributes 00:07:37.507 ========================== 00:07:37.507 Submission Queue Entry Size 00:07:37.507 Max: 64 00:07:37.507 Min: 64 00:07:37.507 Completion Queue Entry Size 00:07:37.507 Max: 16 00:07:37.507 Min: 16 00:07:37.507 Number of Namespaces: 256 00:07:37.507 Compare Command: Supported 00:07:37.507 Write Uncorrectable Command: Not Supported 00:07:37.507 Dataset Management Command: 
Supported 00:07:37.507 Write Zeroes Command: Supported 00:07:37.507 Set Features Save Field: Supported 00:07:37.507 Reservations: Not Supported 00:07:37.507 Timestamp: Supported 00:07:37.507 Copy: Supported 00:07:37.507 Volatile Write Cache: Present 00:07:37.507 Atomic Write Unit (Normal): 1 00:07:37.507 Atomic Write Unit (PFail): 1 00:07:37.507 Atomic Compare & Write Unit: 1 00:07:37.507 Fused Compare & Write: Not Supported 00:07:37.507 Scatter-Gather List 00:07:37.507 SGL Command Set: Supported 00:07:37.507 SGL Keyed: Not Supported 00:07:37.507 SGL Bit Bucket Descriptor: Not Supported 00:07:37.507 SGL Metadata Pointer: Not Supported 00:07:37.507 Oversized SGL: Not Supported 00:07:37.507 SGL Metadata Address: Not Supported 00:07:37.507 SGL Offset: Not Supported 00:07:37.507 Transport SGL Data Block: Not Supported 00:07:37.507 Replay Protected Memory Block: Not Supported 00:07:37.507 00:07:37.507 Firmware Slot Information 00:07:37.507 ========================= 00:07:37.507 Active slot: 1 00:07:37.507 Slot 1 Firmware Revision: 1.0 00:07:37.507 00:07:37.507 00:07:37.507 Commands Supported and Effects 00:07:37.507 ============================== 00:07:37.507 Admin Commands 00:07:37.507 -------------- 00:07:37.507 Delete I/O Submission Queue (00h): Supported 00:07:37.507 Create I/O Submission Queue (01h): Supported 00:07:37.507 Get Log Page (02h): Supported 00:07:37.507 Delete I/O Completion Queue (04h): Supported 00:07:37.507 Create I/O Completion Queue (05h): Supported 00:07:37.507 Identify (06h): Supported 00:07:37.507 Abort (08h): Supported 00:07:37.507 Set Features (09h): Supported 00:07:37.507 Get Features (0Ah): Supported 00:07:37.507 Asynchronous Event Request (0Ch): Supported 00:07:37.507 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.507 Directive Send (19h): Supported 00:07:37.507 Directive Receive (1Ah): Supported 00:07:37.507 Virtualization Management (1Ch): Supported 00:07:37.507 Doorbell Buffer Config (7Ch): Supported 00:07:37.507 Format NVM (80h): Supported LBA-Change 00:07:37.507 I/O Commands 00:07:37.507 ------------ 00:07:37.507 Flush (00h): Supported LBA-Change 00:07:37.507 Write (01h): Supported LBA-Change 00:07:37.507 Read (02h): Supported 00:07:37.507 Compare (05h): Supported 00:07:37.507 Write Zeroes (08h): Supported LBA-Change 00:07:37.507 Dataset Management (09h): Supported LBA-Change 00:07:37.507 Unknown (0Ch): Supported 00:07:37.507 Unknown (12h): Supported 00:07:37.507 Copy (19h): Supported LBA-Change 00:07:37.507 Unknown (1Dh): Supported LBA-Change 00:07:37.507 00:07:37.507 Error Log 00:07:37.507 ========= 00:07:37.507 00:07:37.507 Arbitration 00:07:37.507 =========== 00:07:37.507 Arbitration Burst: no limit 00:07:37.507 00:07:37.507 Power Management 00:07:37.507 ================ 00:07:37.507 Number of Power States: 1 00:07:37.507 Current Power State: Power State #0 00:07:37.507 Power State #0: 00:07:37.507 Max Power: 25.00 W 00:07:37.507 Non-Operational State: Operational 00:07:37.507 Entry Latency: 16 microseconds 00:07:37.507 Exit Latency: 4 microseconds 00:07:37.507 Relative Read Throughput: 0 00:07:37.507 Relative Read Latency: 0 00:07:37.507 Relative Write Throughput: 0 00:07:37.507 Relative Write Latency: 0 00:07:37.507 Idle Power: Not Reported 00:07:37.507 Active Power: Not Reported 00:07:37.507 Non-Operational Permissive Mode: Not Supported 00:07:37.507 00:07:37.507 Health Information 00:07:37.507 ================== 00:07:37.507 Critical Warnings: 00:07:37.507 Available Spare Space: OK 00:07:37.507 Temperature: OK 00:07:37.507 Device 
Reliability: OK 00:07:37.507 Read Only: No 00:07:37.507 Volatile Memory Backup: OK 00:07:37.507 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.507 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.508 Available Spare: 0% 00:07:37.508 Available Spare Threshold: 0% 00:07:37.508 Life Percentage Used: 0% 00:07:37.508 Data Units Read: 2213 00:07:37.508 Data Units Written: 2001 00:07:37.508 Host Read Commands: 117433 00:07:37.508 Host Write Commands: 115702 00:07:37.508 Controller Busy Time: 0 minutes 00:07:37.508 Power Cycles: 0 00:07:37.508 Power On Hours: 0 hours 00:07:37.508 Unsafe Shutdowns: 0 00:07:37.508 Unrecoverable Media Errors: 0 00:07:37.508 Lifetime Error Log Entries: 0 00:07:37.508 Warning Temperature Time: 0 minutes 00:07:37.508 Critical Temperature Time: 0 minutes 00:07:37.508 00:07:37.508 Number of Queues 00:07:37.508 ================ 00:07:37.508 Number of I/O Submission Queues: 64 00:07:37.508 Number of I/O Completion Queues: 64 00:07:37.508 00:07:37.508 ZNS Specific Controller Data 00:07:37.508 ============================ 00:07:37.508 Zone Append Size Limit: 0 00:07:37.508 00:07:37.508 00:07:37.508 Active Namespaces 00:07:37.508 ================= 00:07:37.508 Namespace ID:1 00:07:37.508 Error Recovery Timeout: Unlimited 00:07:37.508 Command Set Identifier: NVM (00h) 00:07:37.508 Deallocate: Supported 00:07:37.508 Deallocated/Unwritten Error: Supported 00:07:37.508 Deallocated Read Value: All 0x00 00:07:37.508 Deallocate in Write Zeroes: Not Supported 00:07:37.508 Deallocated Guard Field: 0xFFFF 00:07:37.508 Flush: Supported 00:07:37.508 Reservation: Not Supported 00:07:37.508 Namespace Sharing Capabilities: Private 00:07:37.508 Size (in LBAs): 1048576 (4GiB) 00:07:37.508 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.508 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.508 Thin Provisioning: Not Supported 00:07:37.508 Per-NS Atomic Units: No 00:07:37.508 Maximum Single Source Range Length: 128 00:07:37.508 Maximum Copy Length: 128 00:07:37.508 Maximum Source Range Count: 128 00:07:37.508 NGUID/EUI64 Never Reused: No 00:07:37.508 Namespace Write Protected: No 00:07:37.508 Number of LBA Formats: 8 00:07:37.508 Current LBA Format: LBA Format #04 00:07:37.508 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.508 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.508 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.508 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.508 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.508 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.508 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.508 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.508 00:07:37.508 NVM Specific Namespace Data 00:07:37.508 =========================== 00:07:37.508 Logical Block Storage Tag Mask: 0 00:07:37.508 Protection Information Capabilities: 00:07:37.508 16b Guard Protection Information Storage Tag Support: No 00:07:37.508 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.508 Storage Tag Check Read Support: No 00:07:37.508 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Namespace ID:2 00:07:37.508 Error Recovery Timeout: Unlimited 00:07:37.508 Command Set Identifier: NVM (00h) 00:07:37.508 Deallocate: Supported 00:07:37.508 Deallocated/Unwritten Error: Supported 00:07:37.508 Deallocated Read Value: All 0x00 00:07:37.508 Deallocate in Write Zeroes: Not Supported 00:07:37.508 Deallocated Guard Field: 0xFFFF 00:07:37.508 Flush: Supported 00:07:37.508 Reservation: Not Supported 00:07:37.508 Namespace Sharing Capabilities: Private 00:07:37.508 Size (in LBAs): 1048576 (4GiB) 00:07:37.508 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.508 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.508 Thin Provisioning: Not Supported 00:07:37.508 Per-NS Atomic Units: No 00:07:37.508 Maximum Single Source Range Length: 128 00:07:37.508 Maximum Copy Length: 128 00:07:37.508 Maximum Source Range Count: 128 00:07:37.508 NGUID/EUI64 Never Reused: No 00:07:37.508 Namespace Write Protected: No 00:07:37.508 Number of LBA Formats: 8 00:07:37.508 Current LBA Format: LBA Format #04 00:07:37.508 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.508 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.508 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.508 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.508 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.508 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.508 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.508 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.508 00:07:37.508 NVM Specific Namespace Data 00:07:37.508 =========================== 00:07:37.508 Logical Block Storage Tag Mask: 0 00:07:37.508 Protection Information Capabilities: 00:07:37.508 16b Guard Protection Information Storage Tag Support: No 00:07:37.508 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.508 Storage Tag Check Read Support: No 00:07:37.508 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Namespace ID:3 00:07:37.508 Error Recovery Timeout: Unlimited 00:07:37.508 Command Set Identifier: NVM (00h) 00:07:37.508 Deallocate: Supported 00:07:37.508 Deallocated/Unwritten Error: Supported 00:07:37.508 Deallocated Read Value: All 0x00 00:07:37.508 Deallocate in Write Zeroes: Not Supported 00:07:37.508 Deallocated Guard Field: 0xFFFF 00:07:37.508 Flush: Supported 00:07:37.508 Reservation: Not Supported 00:07:37.508 
Namespace Sharing Capabilities: Private 00:07:37.508 Size (in LBAs): 1048576 (4GiB) 00:07:37.508 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.508 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.508 Thin Provisioning: Not Supported 00:07:37.508 Per-NS Atomic Units: No 00:07:37.508 Maximum Single Source Range Length: 128 00:07:37.508 Maximum Copy Length: 128 00:07:37.508 Maximum Source Range Count: 128 00:07:37.508 NGUID/EUI64 Never Reused: No 00:07:37.508 Namespace Write Protected: No 00:07:37.508 Number of LBA Formats: 8 00:07:37.508 Current LBA Format: LBA Format #04 00:07:37.508 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.508 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.508 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.508 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.508 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.508 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.508 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.508 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.508 00:07:37.508 NVM Specific Namespace Data 00:07:37.508 =========================== 00:07:37.508 Logical Block Storage Tag Mask: 0 00:07:37.508 Protection Information Capabilities: 00:07:37.508 16b Guard Protection Information Storage Tag Support: No 00:07:37.508 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.508 Storage Tag Check Read Support: No 00:07:37.508 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.508 14:44:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:37.508 14:44:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:37.767 ===================================================== 00:07:37.767 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:37.767 ===================================================== 00:07:37.767 Controller Capabilities/Features 00:07:37.767 ================================ 00:07:37.767 Vendor ID: 1b36 00:07:37.767 Subsystem Vendor ID: 1af4 00:07:37.767 Serial Number: 12343 00:07:37.767 Model Number: QEMU NVMe Ctrl 00:07:37.767 Firmware Version: 8.0.0 00:07:37.767 Recommended Arb Burst: 6 00:07:37.767 IEEE OUI Identifier: 00 54 52 00:07:37.767 Multi-path I/O 00:07:37.767 May have multiple subsystem ports: No 00:07:37.767 May have multiple controllers: Yes 00:07:37.768 Associated with SR-IOV VF: No 00:07:37.768 Max Data Transfer Size: 524288 00:07:37.768 Max Number of Namespaces: 256 00:07:37.768 Max Number of I/O Queues: 64 00:07:37.768 NVMe Specification Version (VS): 1.4 00:07:37.768 NVMe Specification Version (Identify): 1.4 00:07:37.768 Maximum Queue Entries: 2048 
00:07:37.768 Contiguous Queues Required: Yes 00:07:37.768 Arbitration Mechanisms Supported 00:07:37.768 Weighted Round Robin: Not Supported 00:07:37.768 Vendor Specific: Not Supported 00:07:37.768 Reset Timeout: 7500 ms 00:07:37.768 Doorbell Stride: 4 bytes 00:07:37.768 NVM Subsystem Reset: Not Supported 00:07:37.768 Command Sets Supported 00:07:37.768 NVM Command Set: Supported 00:07:37.768 Boot Partition: Not Supported 00:07:37.768 Memory Page Size Minimum: 4096 bytes 00:07:37.768 Memory Page Size Maximum: 65536 bytes 00:07:37.768 Persistent Memory Region: Not Supported 00:07:37.768 Optional Asynchronous Events Supported 00:07:37.768 Namespace Attribute Notices: Supported 00:07:37.768 Firmware Activation Notices: Not Supported 00:07:37.768 ANA Change Notices: Not Supported 00:07:37.768 PLE Aggregate Log Change Notices: Not Supported 00:07:37.768 LBA Status Info Alert Notices: Not Supported 00:07:37.768 EGE Aggregate Log Change Notices: Not Supported 00:07:37.768 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.768 Zone Descriptor Change Notices: Not Supported 00:07:37.768 Discovery Log Change Notices: Not Supported 00:07:37.768 Controller Attributes 00:07:37.768 128-bit Host Identifier: Not Supported 00:07:37.768 Non-Operational Permissive Mode: Not Supported 00:07:37.768 NVM Sets: Not Supported 00:07:37.768 Read Recovery Levels: Not Supported 00:07:37.768 Endurance Groups: Supported 00:07:37.768 Predictable Latency Mode: Not Supported 00:07:37.768 Traffic Based Keep ALive: Not Supported 00:07:37.768 Namespace Granularity: Not Supported 00:07:37.768 SQ Associations: Not Supported 00:07:37.768 UUID List: Not Supported 00:07:37.768 Multi-Domain Subsystem: Not Supported 00:07:37.768 Fixed Capacity Management: Not Supported 00:07:37.768 Variable Capacity Management: Not Supported 00:07:37.768 Delete Endurance Group: Not Supported 00:07:37.768 Delete NVM Set: Not Supported 00:07:37.768 Extended LBA Formats Supported: Supported 00:07:37.768 Flexible Data Placement Supported: Supported 00:07:37.768 00:07:37.768 Controller Memory Buffer Support 00:07:37.768 ================================ 00:07:37.768 Supported: No 00:07:37.768 00:07:37.768 Persistent Memory Region Support 00:07:37.768 ================================ 00:07:37.768 Supported: No 00:07:37.768 00:07:37.768 Admin Command Set Attributes 00:07:37.768 ============================ 00:07:37.768 Security Send/Receive: Not Supported 00:07:37.768 Format NVM: Supported 00:07:37.768 Firmware Activate/Download: Not Supported 00:07:37.768 Namespace Management: Supported 00:07:37.768 Device Self-Test: Not Supported 00:07:37.768 Directives: Supported 00:07:37.768 NVMe-MI: Not Supported 00:07:37.768 Virtualization Management: Not Supported 00:07:37.768 Doorbell Buffer Config: Supported 00:07:37.768 Get LBA Status Capability: Not Supported 00:07:37.768 Command & Feature Lockdown Capability: Not Supported 00:07:37.768 Abort Command Limit: 4 00:07:37.768 Async Event Request Limit: 4 00:07:37.768 Number of Firmware Slots: N/A 00:07:37.768 Firmware Slot 1 Read-Only: N/A 00:07:37.768 Firmware Activation Without Reset: N/A 00:07:37.768 Multiple Update Detection Support: N/A 00:07:37.768 Firmware Update Granularity: No Information Provided 00:07:37.768 Per-Namespace SMART Log: Yes 00:07:37.768 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.768 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:37.768 Command Effects Log Page: Supported 00:07:37.768 Get Log Page Extended Data: Supported 00:07:37.768 Telemetry Log Pages: Not 
Supported 00:07:37.768 Persistent Event Log Pages: Not Supported 00:07:37.768 Supported Log Pages Log Page: May Support 00:07:37.768 Commands Supported & Effects Log Page: Not Supported 00:07:37.768 Feature Identifiers & Effects Log Page:May Support 00:07:37.768 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.768 Data Area 4 for Telemetry Log: Not Supported 00:07:37.768 Error Log Page Entries Supported: 1 00:07:37.768 Keep Alive: Not Supported 00:07:37.768 00:07:37.768 NVM Command Set Attributes 00:07:37.768 ========================== 00:07:37.768 Submission Queue Entry Size 00:07:37.768 Max: 64 00:07:37.768 Min: 64 00:07:37.768 Completion Queue Entry Size 00:07:37.768 Max: 16 00:07:37.768 Min: 16 00:07:37.768 Number of Namespaces: 256 00:07:37.768 Compare Command: Supported 00:07:37.768 Write Uncorrectable Command: Not Supported 00:07:37.768 Dataset Management Command: Supported 00:07:37.768 Write Zeroes Command: Supported 00:07:37.768 Set Features Save Field: Supported 00:07:37.768 Reservations: Not Supported 00:07:37.768 Timestamp: Supported 00:07:37.768 Copy: Supported 00:07:37.768 Volatile Write Cache: Present 00:07:37.768 Atomic Write Unit (Normal): 1 00:07:37.768 Atomic Write Unit (PFail): 1 00:07:37.768 Atomic Compare & Write Unit: 1 00:07:37.768 Fused Compare & Write: Not Supported 00:07:37.768 Scatter-Gather List 00:07:37.768 SGL Command Set: Supported 00:07:37.768 SGL Keyed: Not Supported 00:07:37.768 SGL Bit Bucket Descriptor: Not Supported 00:07:37.768 SGL Metadata Pointer: Not Supported 00:07:37.768 Oversized SGL: Not Supported 00:07:37.768 SGL Metadata Address: Not Supported 00:07:37.768 SGL Offset: Not Supported 00:07:37.768 Transport SGL Data Block: Not Supported 00:07:37.768 Replay Protected Memory Block: Not Supported 00:07:37.768 00:07:37.768 Firmware Slot Information 00:07:37.768 ========================= 00:07:37.768 Active slot: 1 00:07:37.768 Slot 1 Firmware Revision: 1.0 00:07:37.768 00:07:37.768 00:07:37.768 Commands Supported and Effects 00:07:37.768 ============================== 00:07:37.768 Admin Commands 00:07:37.768 -------------- 00:07:37.768 Delete I/O Submission Queue (00h): Supported 00:07:37.768 Create I/O Submission Queue (01h): Supported 00:07:37.768 Get Log Page (02h): Supported 00:07:37.768 Delete I/O Completion Queue (04h): Supported 00:07:37.768 Create I/O Completion Queue (05h): Supported 00:07:37.768 Identify (06h): Supported 00:07:37.768 Abort (08h): Supported 00:07:37.768 Set Features (09h): Supported 00:07:37.768 Get Features (0Ah): Supported 00:07:37.768 Asynchronous Event Request (0Ch): Supported 00:07:37.768 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.768 Directive Send (19h): Supported 00:07:37.768 Directive Receive (1Ah): Supported 00:07:37.768 Virtualization Management (1Ch): Supported 00:07:37.768 Doorbell Buffer Config (7Ch): Supported 00:07:37.768 Format NVM (80h): Supported LBA-Change 00:07:37.768 I/O Commands 00:07:37.768 ------------ 00:07:37.768 Flush (00h): Supported LBA-Change 00:07:37.768 Write (01h): Supported LBA-Change 00:07:37.768 Read (02h): Supported 00:07:37.768 Compare (05h): Supported 00:07:37.768 Write Zeroes (08h): Supported LBA-Change 00:07:37.768 Dataset Management (09h): Supported LBA-Change 00:07:37.768 Unknown (0Ch): Supported 00:07:37.768 Unknown (12h): Supported 00:07:37.768 Copy (19h): Supported LBA-Change 00:07:37.768 Unknown (1Dh): Supported LBA-Change 00:07:37.768 00:07:37.768 Error Log 00:07:37.768 ========= 00:07:37.768 00:07:37.768 Arbitration 00:07:37.768 =========== 
00:07:37.768 Arbitration Burst: no limit 00:07:37.768 00:07:37.768 Power Management 00:07:37.768 ================ 00:07:37.768 Number of Power States: 1 00:07:37.768 Current Power State: Power State #0 00:07:37.768 Power State #0: 00:07:37.768 Max Power: 25.00 W 00:07:37.768 Non-Operational State: Operational 00:07:37.768 Entry Latency: 16 microseconds 00:07:37.768 Exit Latency: 4 microseconds 00:07:37.768 Relative Read Throughput: 0 00:07:37.768 Relative Read Latency: 0 00:07:37.768 Relative Write Throughput: 0 00:07:37.768 Relative Write Latency: 0 00:07:37.768 Idle Power: Not Reported 00:07:37.768 Active Power: Not Reported 00:07:37.768 Non-Operational Permissive Mode: Not Supported 00:07:37.768 00:07:37.768 Health Information 00:07:37.768 ================== 00:07:37.768 Critical Warnings: 00:07:37.768 Available Spare Space: OK 00:07:37.768 Temperature: OK 00:07:37.768 Device Reliability: OK 00:07:37.768 Read Only: No 00:07:37.768 Volatile Memory Backup: OK 00:07:37.768 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.768 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.768 Available Spare: 0% 00:07:37.768 Available Spare Threshold: 0% 00:07:37.768 Life Percentage Used: 0% 00:07:37.768 Data Units Read: 801 00:07:37.768 Data Units Written: 730 00:07:37.768 Host Read Commands: 39755 00:07:37.768 Host Write Commands: 39178 00:07:37.768 Controller Busy Time: 0 minutes 00:07:37.768 Power Cycles: 0 00:07:37.768 Power On Hours: 0 hours 00:07:37.768 Unsafe Shutdowns: 0 00:07:37.768 Unrecoverable Media Errors: 0 00:07:37.768 Lifetime Error Log Entries: 0 00:07:37.768 Warning Temperature Time: 0 minutes 00:07:37.768 Critical Temperature Time: 0 minutes 00:07:37.768 00:07:37.768 Number of Queues 00:07:37.769 ================ 00:07:37.769 Number of I/O Submission Queues: 64 00:07:37.769 Number of I/O Completion Queues: 64 00:07:37.769 00:07:37.769 ZNS Specific Controller Data 00:07:37.769 ============================ 00:07:37.769 Zone Append Size Limit: 0 00:07:37.769 00:07:37.769 00:07:37.769 Active Namespaces 00:07:37.769 ================= 00:07:37.769 Namespace ID:1 00:07:37.769 Error Recovery Timeout: Unlimited 00:07:37.769 Command Set Identifier: NVM (00h) 00:07:37.769 Deallocate: Supported 00:07:37.769 Deallocated/Unwritten Error: Supported 00:07:37.769 Deallocated Read Value: All 0x00 00:07:37.769 Deallocate in Write Zeroes: Not Supported 00:07:37.769 Deallocated Guard Field: 0xFFFF 00:07:37.769 Flush: Supported 00:07:37.769 Reservation: Not Supported 00:07:37.769 Namespace Sharing Capabilities: Multiple Controllers 00:07:37.769 Size (in LBAs): 262144 (1GiB) 00:07:37.769 Capacity (in LBAs): 262144 (1GiB) 00:07:37.769 Utilization (in LBAs): 262144 (1GiB) 00:07:37.769 Thin Provisioning: Not Supported 00:07:37.769 Per-NS Atomic Units: No 00:07:37.769 Maximum Single Source Range Length: 128 00:07:37.769 Maximum Copy Length: 128 00:07:37.769 Maximum Source Range Count: 128 00:07:37.769 NGUID/EUI64 Never Reused: No 00:07:37.769 Namespace Write Protected: No 00:07:37.769 Endurance group ID: 1 00:07:37.769 Number of LBA Formats: 8 00:07:37.769 Current LBA Format: LBA Format #04 00:07:37.769 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.769 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.769 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.769 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.769 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.769 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.769 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:37.769 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.769 00:07:37.769 Get Feature FDP: 00:07:37.769 ================ 00:07:37.769 Enabled: Yes 00:07:37.769 FDP configuration index: 0 00:07:37.769 00:07:37.769 FDP configurations log page 00:07:37.769 =========================== 00:07:37.769 Number of FDP configurations: 1 00:07:37.769 Version: 0 00:07:37.769 Size: 112 00:07:37.769 FDP Configuration Descriptor: 0 00:07:37.769 Descriptor Size: 96 00:07:37.769 Reclaim Group Identifier format: 2 00:07:37.769 FDP Volatile Write Cache: Not Present 00:07:37.769 FDP Configuration: Valid 00:07:37.769 Vendor Specific Size: 0 00:07:37.769 Number of Reclaim Groups: 2 00:07:37.769 Number of Recalim Unit Handles: 8 00:07:37.769 Max Placement Identifiers: 128 00:07:37.769 Number of Namespaces Suppprted: 256 00:07:37.769 Reclaim unit Nominal Size: 6000000 bytes 00:07:37.769 Estimated Reclaim Unit Time Limit: Not Reported 00:07:37.769 RUH Desc #000: RUH Type: Initially Isolated 00:07:37.769 RUH Desc #001: RUH Type: Initially Isolated 00:07:37.769 RUH Desc #002: RUH Type: Initially Isolated 00:07:37.769 RUH Desc #003: RUH Type: Initially Isolated 00:07:37.769 RUH Desc #004: RUH Type: Initially Isolated 00:07:37.769 RUH Desc #005: RUH Type: Initially Isolated 00:07:37.769 RUH Desc #006: RUH Type: Initially Isolated 00:07:37.769 RUH Desc #007: RUH Type: Initially Isolated 00:07:37.769 00:07:37.769 FDP reclaim unit handle usage log page 00:07:37.769 ====================================== 00:07:37.769 Number of Reclaim Unit Handles: 8 00:07:37.769 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:37.769 RUH Usage Desc #001: RUH Attributes: Unused 00:07:37.769 RUH Usage Desc #002: RUH Attributes: Unused 00:07:37.769 RUH Usage Desc #003: RUH Attributes: Unused 00:07:37.769 RUH Usage Desc #004: RUH Attributes: Unused 00:07:37.769 RUH Usage Desc #005: RUH Attributes: Unused 00:07:37.769 RUH Usage Desc #006: RUH Attributes: Unused 00:07:37.769 RUH Usage Desc #007: RUH Attributes: Unused 00:07:37.769 00:07:37.769 FDP statistics log page 00:07:37.769 ======================= 00:07:37.769 Host bytes with metadata written: 472158208 00:07:37.769 Media bytes with metadata written: 472215552 00:07:37.769 Media bytes erased: 0 00:07:37.769 00:07:37.769 FDP events log page 00:07:37.769 =================== 00:07:37.769 Number of FDP events: 0 00:07:37.769 00:07:37.769 NVM Specific Namespace Data 00:07:37.769 =========================== 00:07:37.769 Logical Block Storage Tag Mask: 0 00:07:37.769 Protection Information Capabilities: 00:07:37.769 16b Guard Protection Information Storage Tag Support: No 00:07:37.769 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.769 Storage Tag Check Read Support: No 00:07:37.769 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.769 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.769 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.769 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.769 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.769 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.769 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.769 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.769 00:07:37.769 real 0m1.250s 00:07:37.769 user 0m0.439s 00:07:37.769 sys 0m0.576s 00:07:37.769 14:44:23 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.769 14:44:23 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:37.769 ************************************ 00:07:37.769 END TEST nvme_identify 00:07:37.769 ************************************ 00:07:37.769 14:44:23 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:37.769 14:44:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.769 14:44:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.769 14:44:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.769 ************************************ 00:07:37.769 START TEST nvme_perf 00:07:37.769 ************************************ 00:07:37.769 14:44:23 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:37.769 14:44:23 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:39.147 Initializing NVMe Controllers 00:07:39.147 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:39.147 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:39.147 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:39.147 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:39.147 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:39.147 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:39.147 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:39.147 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:39.147 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:39.147 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:39.147 Initialization complete. Launching workers. 
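The identify and perf passes in this log are driven by nvme/nvme.sh inside the CI harness. Outside Jenkins, a roughly equivalent pair of runs against the single controller at 0000:00:13.0 might look like the minimal sketch below; the binary paths, the BDF, and the command-line flags are copied from the invocations visible above, while the setup.sh step and running from a local SPDK checkout are assumptions about a developer box rather than part of this job:

  # assumed prerequisite on a local box: bind NVMe devices to a userspace driver so SPDK tools can claim them
  sudo ./scripts/setup.sh
  # dump controller and namespace data for one PCIe controller, as the nvme_identify test does above
  ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
  # short read run with latency histograms; flags copied verbatim from the nvme_perf invocation above
  ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

This is a sketch of how the two tools shown in this log are exercised, not the exact test script; the CI job additionally loops over every attached controller and wraps each run in run_test/xtrace bookkeeping.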
00:07:39.147 ======================================================== 00:07:39.147 Latency(us) 00:07:39.147 Device Information : IOPS MiB/s Average min max 00:07:39.147 PCIE (0000:00:11.0) NSID 1 from core 0: 10094.81 118.30 12699.03 5495.17 40128.33 00:07:39.147 PCIE (0000:00:13.0) NSID 1 from core 0: 10094.81 118.30 12682.12 5446.41 39447.56 00:07:39.147 PCIE (0000:00:10.0) NSID 1 from core 0: 10094.81 118.30 12661.45 5378.95 37972.76 00:07:39.147 PCIE (0000:00:12.0) NSID 1 from core 0: 10094.81 118.30 12642.01 5363.17 36201.01 00:07:39.147 PCIE (0000:00:12.0) NSID 2 from core 0: 10094.81 118.30 12621.70 5452.28 34851.15 00:07:39.147 PCIE (0000:00:12.0) NSID 3 from core 0: 10158.70 119.05 12521.83 5488.10 28115.54 00:07:39.147 ======================================================== 00:07:39.147 Total : 60632.74 710.54 12637.90 5363.17 40128.33 00:07:39.147 00:07:39.147 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.147 ================================================================================= 00:07:39.147 1.00000% : 5696.591us 00:07:39.147 10.00000% : 6251.126us 00:07:39.147 25.00000% : 8872.566us 00:07:39.147 50.00000% : 13006.375us 00:07:39.147 75.00000% : 16131.938us 00:07:39.147 90.00000% : 18047.606us 00:07:39.147 95.00000% : 18753.378us 00:07:39.147 98.00000% : 19761.625us 00:07:39.147 99.00000% : 29844.086us 00:07:39.147 99.50000% : 39119.951us 00:07:39.147 99.90000% : 39926.548us 00:07:39.147 99.99000% : 40128.197us 00:07:39.147 99.99900% : 40329.846us 00:07:39.147 99.99990% : 40329.846us 00:07:39.147 99.99999% : 40329.846us 00:07:39.147 00:07:39.147 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.147 ================================================================================= 00:07:39.147 1.00000% : 5696.591us 00:07:39.147 10.00000% : 6251.126us 00:07:39.147 25.00000% : 8620.505us 00:07:39.147 50.00000% : 12905.551us 00:07:39.147 75.00000% : 16131.938us 00:07:39.147 90.00000% : 18148.431us 00:07:39.147 95.00000% : 18854.203us 00:07:39.147 98.00000% : 19660.800us 00:07:39.147 99.00000% : 28634.191us 00:07:39.147 99.50000% : 38313.354us 00:07:39.147 99.90000% : 39321.600us 00:07:39.147 99.99000% : 39523.249us 00:07:39.147 99.99900% : 39523.249us 00:07:39.147 99.99990% : 39523.249us 00:07:39.147 99.99999% : 39523.249us 00:07:39.147 00:07:39.147 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.147 ================================================================================= 00:07:39.147 1.00000% : 5646.178us 00:07:39.147 10.00000% : 6251.126us 00:07:39.147 25.00000% : 8670.917us 00:07:39.147 50.00000% : 12855.138us 00:07:39.147 75.00000% : 16232.763us 00:07:39.147 90.00000% : 17946.782us 00:07:39.147 95.00000% : 18551.729us 00:07:39.147 98.00000% : 19358.326us 00:07:39.147 99.00000% : 27625.945us 00:07:39.147 99.50000% : 36700.160us 00:07:39.147 99.90000% : 37910.055us 00:07:39.147 99.99000% : 38111.705us 00:07:39.147 99.99900% : 38111.705us 00:07:39.147 99.99990% : 38111.705us 00:07:39.147 99.99999% : 38111.705us 00:07:39.147 00:07:39.147 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.147 ================================================================================= 00:07:39.147 1.00000% : 5646.178us 00:07:39.147 10.00000% : 6251.126us 00:07:39.147 25.00000% : 8771.742us 00:07:39.147 50.00000% : 12804.726us 00:07:39.147 75.00000% : 16232.763us 00:07:39.147 90.00000% : 18148.431us 00:07:39.147 95.00000% : 18753.378us 00:07:39.147 98.00000% : 19358.326us 
00:07:39.147 99.00000% : 26617.698us 00:07:39.147 99.50000% : 35086.966us 00:07:39.147 99.90000% : 36095.212us 00:07:39.147 99.99000% : 36296.862us 00:07:39.147 99.99900% : 36296.862us 00:07:39.147 99.99990% : 36296.862us 00:07:39.147 99.99999% : 36296.862us 00:07:39.147 00:07:39.147 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.147 ================================================================================= 00:07:39.147 1.00000% : 5696.591us 00:07:39.147 10.00000% : 6251.126us 00:07:39.147 25.00000% : 8872.566us 00:07:39.147 50.00000% : 12905.551us 00:07:39.147 75.00000% : 16131.938us 00:07:39.147 90.00000% : 18148.431us 00:07:39.147 95.00000% : 18955.028us 00:07:39.147 98.00000% : 19559.975us 00:07:39.147 99.00000% : 27625.945us 00:07:39.147 99.50000% : 33675.422us 00:07:39.147 99.90000% : 34683.668us 00:07:39.147 99.99000% : 34885.317us 00:07:39.147 99.99900% : 34885.317us 00:07:39.147 99.99990% : 34885.317us 00:07:39.147 99.99999% : 34885.317us 00:07:39.147 00:07:39.147 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.147 ================================================================================= 00:07:39.147 1.00000% : 5696.591us 00:07:39.147 10.00000% : 6251.126us 00:07:39.147 25.00000% : 8973.391us 00:07:39.147 50.00000% : 13006.375us 00:07:39.147 75.00000% : 16031.114us 00:07:39.147 90.00000% : 18047.606us 00:07:39.147 95.00000% : 18652.554us 00:07:39.147 98.00000% : 19459.151us 00:07:39.147 99.00000% : 19963.274us 00:07:39.147 99.50000% : 27020.997us 00:07:39.147 99.90000% : 28029.243us 00:07:39.147 99.99000% : 28230.892us 00:07:39.147 99.99900% : 28230.892us 00:07:39.147 99.99990% : 28230.892us 00:07:39.147 99.99999% : 28230.892us 00:07:39.147 00:07:39.147 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.147 ============================================================================== 00:07:39.147 Range in us Cumulative IO count 00:07:39.147 5494.942 - 5520.148: 0.0692% ( 7) 00:07:39.147 5520.148 - 5545.354: 0.1681% ( 10) 00:07:39.147 5545.354 - 5570.560: 0.2670% ( 10) 00:07:39.147 5570.560 - 5595.766: 0.3659% ( 10) 00:07:39.147 5595.766 - 5620.972: 0.4945% ( 13) 00:07:39.147 5620.972 - 5646.178: 0.6626% ( 17) 00:07:39.147 5646.178 - 5671.385: 0.8406% ( 18) 00:07:39.147 5671.385 - 5696.591: 1.0384% ( 20) 00:07:39.147 5696.591 - 5721.797: 1.3252% ( 29) 00:07:39.147 5721.797 - 5747.003: 1.5724% ( 25) 00:07:39.147 5747.003 - 5772.209: 1.8888% ( 32) 00:07:39.147 5772.209 - 5797.415: 2.1855% ( 30) 00:07:39.147 5797.415 - 5822.622: 2.5316% ( 35) 00:07:39.147 5822.622 - 5847.828: 2.9668% ( 44) 00:07:39.147 5847.828 - 5873.034: 3.3821% ( 42) 00:07:39.147 5873.034 - 5898.240: 3.7282% ( 35) 00:07:39.147 5898.240 - 5923.446: 4.1930% ( 47) 00:07:39.147 5923.446 - 5948.652: 4.6479% ( 46) 00:07:39.147 5948.652 - 5973.858: 5.0336% ( 39) 00:07:39.147 5973.858 - 5999.065: 5.4490% ( 42) 00:07:39.147 5999.065 - 6024.271: 5.9830% ( 54) 00:07:39.147 6024.271 - 6049.477: 6.4181% ( 44) 00:07:39.147 6049.477 - 6074.683: 6.8829% ( 47) 00:07:39.147 6074.683 - 6099.889: 7.3873% ( 51) 00:07:39.147 6099.889 - 6125.095: 7.8422% ( 46) 00:07:39.147 6125.095 - 6150.302: 8.3366% ( 50) 00:07:39.147 6150.302 - 6175.508: 8.8509% ( 52) 00:07:39.147 6175.508 - 6200.714: 9.3354% ( 49) 00:07:39.147 6200.714 - 6225.920: 9.8101% ( 48) 00:07:39.147 6225.920 - 6251.126: 10.2453% ( 44) 00:07:39.147 6251.126 - 6276.332: 10.6804% ( 44) 00:07:39.147 6276.332 - 6301.538: 11.1254% ( 45) 00:07:39.147 6301.538 - 6326.745: 11.5704% ( 45) 
00:07:39.147 6326.745 - 6351.951: 11.9462% ( 38) 00:07:39.147 6351.951 - 6377.157: 12.2923% ( 35) 00:07:39.147 6377.157 - 6402.363: 12.6582% ( 37) 00:07:39.147 6402.363 - 6427.569: 13.0044% ( 35) 00:07:39.147 6427.569 - 6452.775: 13.3307% ( 33) 00:07:39.147 6452.775 - 6503.188: 13.9241% ( 60) 00:07:39.147 6503.188 - 6553.600: 14.4581% ( 54) 00:07:39.147 6553.600 - 6604.012: 14.9723% ( 52) 00:07:39.147 6604.012 - 6654.425: 15.4668% ( 50) 00:07:39.147 6654.425 - 6704.837: 15.7832% ( 32) 00:07:39.147 6704.837 - 6755.249: 16.0305% ( 25) 00:07:39.147 6755.249 - 6805.662: 16.2579% ( 23) 00:07:39.147 6805.662 - 6856.074: 16.4557% ( 20) 00:07:39.147 6856.074 - 6906.486: 16.6733% ( 22) 00:07:39.147 6906.486 - 6956.898: 16.9007% ( 23) 00:07:39.147 6956.898 - 7007.311: 17.0886% ( 19) 00:07:39.147 7007.311 - 7057.723: 17.3062% ( 22) 00:07:39.147 7057.723 - 7108.135: 17.4941% ( 19) 00:07:39.148 7108.135 - 7158.548: 17.6622% ( 17) 00:07:39.148 7158.548 - 7208.960: 17.8402% ( 18) 00:07:39.148 7208.960 - 7259.372: 18.0676% ( 23) 00:07:39.148 7259.372 - 7309.785: 18.3347% ( 27) 00:07:39.148 7309.785 - 7360.197: 18.6610% ( 33) 00:07:39.148 7360.197 - 7410.609: 18.8983% ( 24) 00:07:39.148 7410.609 - 7461.022: 19.1653% ( 27) 00:07:39.148 7461.022 - 7511.434: 19.4719% ( 31) 00:07:39.148 7511.434 - 7561.846: 19.7093% ( 24) 00:07:39.148 7561.846 - 7612.258: 19.9367% ( 23) 00:07:39.148 7612.258 - 7662.671: 20.1938% ( 26) 00:07:39.148 7662.671 - 7713.083: 20.4213% ( 23) 00:07:39.148 7713.083 - 7763.495: 20.6586% ( 24) 00:07:39.148 7763.495 - 7813.908: 20.8465% ( 19) 00:07:39.148 7813.908 - 7864.320: 21.0740% ( 23) 00:07:39.148 7864.320 - 7914.732: 21.2915% ( 22) 00:07:39.148 7914.732 - 7965.145: 21.4893% ( 20) 00:07:39.148 7965.145 - 8015.557: 21.6970% ( 21) 00:07:39.148 8015.557 - 8065.969: 21.9937% ( 30) 00:07:39.148 8065.969 - 8116.382: 22.2112% ( 22) 00:07:39.148 8116.382 - 8166.794: 22.3794% ( 17) 00:07:39.148 8166.794 - 8217.206: 22.5178% ( 14) 00:07:39.148 8217.206 - 8267.618: 22.7057% ( 19) 00:07:39.148 8267.618 - 8318.031: 22.8441% ( 14) 00:07:39.148 8318.031 - 8368.443: 23.0123% ( 17) 00:07:39.148 8368.443 - 8418.855: 23.2397% ( 23) 00:07:39.148 8418.855 - 8469.268: 23.4375% ( 20) 00:07:39.148 8469.268 - 8519.680: 23.6650% ( 23) 00:07:39.148 8519.680 - 8570.092: 23.8924% ( 23) 00:07:39.148 8570.092 - 8620.505: 24.0704% ( 18) 00:07:39.148 8620.505 - 8670.917: 24.2682% ( 20) 00:07:39.148 8670.917 - 8721.329: 24.4858% ( 22) 00:07:39.148 8721.329 - 8771.742: 24.6835% ( 20) 00:07:39.148 8771.742 - 8822.154: 24.9011% ( 22) 00:07:39.148 8822.154 - 8872.566: 25.1483% ( 25) 00:07:39.148 8872.566 - 8922.978: 25.4252% ( 28) 00:07:39.148 8922.978 - 8973.391: 25.7318% ( 31) 00:07:39.148 8973.391 - 9023.803: 26.0581% ( 33) 00:07:39.148 9023.803 - 9074.215: 26.3845% ( 33) 00:07:39.148 9074.215 - 9124.628: 26.7603% ( 38) 00:07:39.148 9124.628 - 9175.040: 27.0767% ( 32) 00:07:39.148 9175.040 - 9225.452: 27.3635% ( 29) 00:07:39.148 9225.452 - 9275.865: 27.6404% ( 28) 00:07:39.148 9275.865 - 9326.277: 27.9866% ( 35) 00:07:39.148 9326.277 - 9376.689: 28.3129% ( 33) 00:07:39.148 9376.689 - 9427.102: 28.6392% ( 33) 00:07:39.148 9427.102 - 9477.514: 28.9656% ( 33) 00:07:39.148 9477.514 - 9527.926: 29.3315% ( 37) 00:07:39.148 9527.926 - 9578.338: 29.7369% ( 41) 00:07:39.148 9578.338 - 9628.751: 30.1523% ( 42) 00:07:39.148 9628.751 - 9679.163: 30.5380% ( 39) 00:07:39.148 9679.163 - 9729.575: 30.8940% ( 36) 00:07:39.148 9729.575 - 9779.988: 31.2203% ( 33) 00:07:39.148 9779.988 - 9830.400: 31.5170% ( 30) 00:07:39.148 9830.400 - 
9880.812: 31.7642% ( 25) 00:07:39.148 9880.812 - 9931.225: 31.9818% ( 22) 00:07:39.148 9931.225 - 9981.637: 32.1994% ( 22) 00:07:39.148 9981.637 - 10032.049: 32.4070% ( 21) 00:07:39.148 10032.049 - 10082.462: 32.6048% ( 20) 00:07:39.148 10082.462 - 10132.874: 32.8026% ( 20) 00:07:39.148 10132.874 - 10183.286: 32.9707% ( 17) 00:07:39.148 10183.286 - 10233.698: 33.1586% ( 19) 00:07:39.148 10233.698 - 10284.111: 33.2872% ( 13) 00:07:39.148 10284.111 - 10334.523: 33.4256% ( 14) 00:07:39.148 10334.523 - 10384.935: 33.5542% ( 13) 00:07:39.148 10384.935 - 10435.348: 33.6531% ( 10) 00:07:39.148 10435.348 - 10485.760: 33.7915% ( 14) 00:07:39.148 10485.760 - 10536.172: 33.8904% ( 10) 00:07:39.148 10536.172 - 10586.585: 34.0190% ( 13) 00:07:39.148 10586.585 - 10636.997: 34.1475% ( 13) 00:07:39.148 10636.997 - 10687.409: 34.3157% ( 17) 00:07:39.148 10687.409 - 10737.822: 34.4541% ( 14) 00:07:39.148 10737.822 - 10788.234: 34.6222% ( 17) 00:07:39.148 10788.234 - 10838.646: 34.8101% ( 19) 00:07:39.148 10838.646 - 10889.058: 34.9585% ( 15) 00:07:39.148 10889.058 - 10939.471: 35.0969% ( 14) 00:07:39.148 10939.471 - 10989.883: 35.2848% ( 19) 00:07:39.148 10989.883 - 11040.295: 35.4331% ( 15) 00:07:39.148 11040.295 - 11090.708: 35.6210% ( 19) 00:07:39.148 11090.708 - 11141.120: 35.7595% ( 14) 00:07:39.148 11141.120 - 11191.532: 35.9968% ( 24) 00:07:39.148 11191.532 - 11241.945: 36.2342% ( 24) 00:07:39.148 11241.945 - 11292.357: 36.4320% ( 20) 00:07:39.148 11292.357 - 11342.769: 36.6990% ( 27) 00:07:39.148 11342.769 - 11393.182: 37.0154% ( 32) 00:07:39.148 11393.182 - 11443.594: 37.3418% ( 33) 00:07:39.148 11443.594 - 11494.006: 37.6681% ( 33) 00:07:39.148 11494.006 - 11544.418: 38.0340% ( 37) 00:07:39.148 11544.418 - 11594.831: 38.3801% ( 35) 00:07:39.148 11594.831 - 11645.243: 38.7164% ( 34) 00:07:39.148 11645.243 - 11695.655: 39.0229% ( 31) 00:07:39.148 11695.655 - 11746.068: 39.2801% ( 26) 00:07:39.148 11746.068 - 11796.480: 39.6361% ( 36) 00:07:39.148 11796.480 - 11846.892: 39.9328% ( 30) 00:07:39.148 11846.892 - 11897.305: 40.1998% ( 27) 00:07:39.148 11897.305 - 11947.717: 40.5360% ( 34) 00:07:39.148 11947.717 - 11998.129: 40.7733% ( 24) 00:07:39.148 11998.129 - 12048.542: 41.0305% ( 26) 00:07:39.148 12048.542 - 12098.954: 41.2975% ( 27) 00:07:39.148 12098.954 - 12149.366: 41.5546% ( 26) 00:07:39.148 12149.366 - 12199.778: 41.8018% ( 25) 00:07:39.148 12199.778 - 12250.191: 42.1875% ( 39) 00:07:39.148 12250.191 - 12300.603: 42.5336% ( 35) 00:07:39.148 12300.603 - 12351.015: 42.9885% ( 46) 00:07:39.148 12351.015 - 12401.428: 43.3445% ( 36) 00:07:39.148 12401.428 - 12451.840: 43.7896% ( 45) 00:07:39.148 12451.840 - 12502.252: 44.2642% ( 48) 00:07:39.148 12502.252 - 12552.665: 44.8873% ( 63) 00:07:39.148 12552.665 - 12603.077: 45.4707% ( 59) 00:07:39.148 12603.077 - 12653.489: 46.0443% ( 58) 00:07:39.148 12653.489 - 12703.902: 46.7464% ( 71) 00:07:39.148 12703.902 - 12754.314: 47.3991% ( 66) 00:07:39.148 12754.314 - 12804.726: 48.0518% ( 66) 00:07:39.148 12804.726 - 12855.138: 48.7737% ( 73) 00:07:39.148 12855.138 - 12905.551: 49.4561% ( 69) 00:07:39.148 12905.551 - 13006.375: 50.9395% ( 150) 00:07:39.148 13006.375 - 13107.200: 52.3635% ( 144) 00:07:39.148 13107.200 - 13208.025: 53.7777% ( 143) 00:07:39.148 13208.025 - 13308.849: 55.1622% ( 140) 00:07:39.148 13308.849 - 13409.674: 56.3884% ( 124) 00:07:39.148 13409.674 - 13510.498: 57.4466% ( 107) 00:07:39.148 13510.498 - 13611.323: 58.1982% ( 76) 00:07:39.148 13611.323 - 13712.148: 58.9597% ( 77) 00:07:39.148 13712.148 - 13812.972: 59.6717% ( 72) 
00:07:39.148 13812.972 - 13913.797: 60.3244% ( 66) 00:07:39.148 13913.797 - 14014.622: 61.0858% ( 77) 00:07:39.148 14014.622 - 14115.446: 61.6792% ( 60) 00:07:39.148 14115.446 - 14216.271: 62.2824% ( 61) 00:07:39.148 14216.271 - 14317.095: 62.8461% ( 57) 00:07:39.148 14317.095 - 14417.920: 63.3505% ( 51) 00:07:39.148 14417.920 - 14518.745: 63.9043% ( 56) 00:07:39.148 14518.745 - 14619.569: 64.4086% ( 51) 00:07:39.148 14619.569 - 14720.394: 64.8141% ( 41) 00:07:39.148 14720.394 - 14821.218: 65.2690% ( 46) 00:07:39.148 14821.218 - 14922.043: 65.6448% ( 38) 00:07:39.148 14922.043 - 15022.868: 66.0305% ( 39) 00:07:39.148 15022.868 - 15123.692: 66.5843% ( 56) 00:07:39.148 15123.692 - 15224.517: 67.1282% ( 55) 00:07:39.148 15224.517 - 15325.342: 67.6919% ( 57) 00:07:39.148 15325.342 - 15426.166: 68.3050% ( 62) 00:07:39.148 15426.166 - 15526.991: 69.2148% ( 92) 00:07:39.148 15526.991 - 15627.815: 70.0455% ( 84) 00:07:39.148 15627.815 - 15728.640: 70.9949% ( 96) 00:07:39.148 15728.640 - 15829.465: 72.2013% ( 122) 00:07:39.148 15829.465 - 15930.289: 73.2397% ( 105) 00:07:39.148 15930.289 - 16031.114: 74.1594% ( 93) 00:07:39.148 16031.114 - 16131.938: 75.4055% ( 126) 00:07:39.148 16131.938 - 16232.763: 76.6812% ( 129) 00:07:39.148 16232.763 - 16333.588: 77.6899% ( 102) 00:07:39.148 16333.588 - 16434.412: 78.5898% ( 91) 00:07:39.148 16434.412 - 16535.237: 79.4600% ( 88) 00:07:39.148 16535.237 - 16636.062: 80.2116% ( 76) 00:07:39.148 16636.062 - 16736.886: 80.9237% ( 72) 00:07:39.148 16736.886 - 16837.711: 81.7049% ( 79) 00:07:39.148 16837.711 - 16938.535: 82.4268% ( 73) 00:07:39.148 16938.535 - 17039.360: 83.2180% ( 80) 00:07:39.148 17039.360 - 17140.185: 83.8212% ( 61) 00:07:39.148 17140.185 - 17241.009: 84.5431% ( 73) 00:07:39.148 17241.009 - 17341.834: 85.2650% ( 73) 00:07:39.148 17341.834 - 17442.658: 85.9672% ( 71) 00:07:39.148 17442.658 - 17543.483: 86.6199% ( 66) 00:07:39.148 17543.483 - 17644.308: 87.2725% ( 66) 00:07:39.148 17644.308 - 17745.132: 87.9648% ( 70) 00:07:39.148 17745.132 - 17845.957: 88.9241% ( 97) 00:07:39.148 17845.957 - 17946.782: 89.8141% ( 90) 00:07:39.148 17946.782 - 18047.606: 90.6448% ( 84) 00:07:39.148 18047.606 - 18148.431: 91.4161% ( 78) 00:07:39.148 18148.431 - 18249.255: 92.1479% ( 74) 00:07:39.148 18249.255 - 18350.080: 92.8105% ( 67) 00:07:39.148 18350.080 - 18450.905: 93.3742% ( 57) 00:07:39.148 18450.905 - 18551.729: 94.0170% ( 65) 00:07:39.148 18551.729 - 18652.554: 94.6499% ( 64) 00:07:39.148 18652.554 - 18753.378: 95.0850% ( 44) 00:07:39.148 18753.378 - 18854.203: 95.4213% ( 34) 00:07:39.148 18854.203 - 18955.028: 95.7180% ( 30) 00:07:39.148 18955.028 - 19055.852: 96.0542% ( 34) 00:07:39.148 19055.852 - 19156.677: 96.3410% ( 29) 00:07:39.148 19156.677 - 19257.502: 96.6871% ( 35) 00:07:39.148 19257.502 - 19358.326: 97.0233% ( 34) 00:07:39.148 19358.326 - 19459.151: 97.3497% ( 33) 00:07:39.148 19459.151 - 19559.975: 97.6562% ( 31) 00:07:39.148 19559.975 - 19660.800: 97.9233% ( 27) 00:07:39.148 19660.800 - 19761.625: 98.1804% ( 26) 00:07:39.148 19761.625 - 19862.449: 98.4078% ( 23) 00:07:39.149 19862.449 - 19963.274: 98.5661% ( 16) 00:07:39.149 19963.274 - 20064.098: 98.6452% ( 8) 00:07:39.149 20064.098 - 20164.923: 98.7243% ( 8) 00:07:39.149 20164.923 - 20265.748: 98.7342% ( 1) 00:07:39.149 28835.840 - 29037.489: 98.7638% ( 3) 00:07:39.149 29037.489 - 29239.138: 98.8430% ( 8) 00:07:39.149 29239.138 - 29440.788: 98.9122% ( 7) 00:07:39.149 29440.788 - 29642.437: 98.9913% ( 8) 00:07:39.149 29642.437 - 29844.086: 99.0803% ( 9) 00:07:39.149 29844.086 - 30045.735: 
99.1693% ( 9) 00:07:39.149 30045.735 - 30247.385: 99.2583% ( 9) 00:07:39.149 30247.385 - 30449.034: 99.3374% ( 8) 00:07:39.149 30449.034 - 30650.683: 99.3671% ( 3) 00:07:39.149 38515.003 - 38716.652: 99.4066% ( 4) 00:07:39.149 38716.652 - 38918.302: 99.4858% ( 8) 00:07:39.149 38918.302 - 39119.951: 99.5649% ( 8) 00:07:39.149 39119.951 - 39321.600: 99.6539% ( 9) 00:07:39.149 39321.600 - 39523.249: 99.7330% ( 8) 00:07:39.149 39523.249 - 39724.898: 99.8220% ( 9) 00:07:39.149 39724.898 - 39926.548: 99.9110% ( 9) 00:07:39.149 39926.548 - 40128.197: 99.9901% ( 8) 00:07:39.149 40128.197 - 40329.846: 100.0000% ( 1) 00:07:39.149 00:07:39.149 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.149 ============================================================================== 00:07:39.149 Range in us Cumulative IO count 00:07:39.149 5444.529 - 5469.735: 0.0297% ( 3) 00:07:39.149 5469.735 - 5494.942: 0.0692% ( 4) 00:07:39.149 5494.942 - 5520.148: 0.1483% ( 8) 00:07:39.149 5520.148 - 5545.354: 0.2472% ( 10) 00:07:39.149 5545.354 - 5570.560: 0.3362% ( 9) 00:07:39.149 5570.560 - 5595.766: 0.4450% ( 11) 00:07:39.149 5595.766 - 5620.972: 0.5736% ( 13) 00:07:39.149 5620.972 - 5646.178: 0.7318% ( 16) 00:07:39.149 5646.178 - 5671.385: 0.8801% ( 15) 00:07:39.149 5671.385 - 5696.591: 1.1570% ( 28) 00:07:39.149 5696.591 - 5721.797: 1.3548% ( 20) 00:07:39.149 5721.797 - 5747.003: 1.6614% ( 31) 00:07:39.149 5747.003 - 5772.209: 2.0767% ( 42) 00:07:39.149 5772.209 - 5797.415: 2.3734% ( 30) 00:07:39.149 5797.415 - 5822.622: 2.7097% ( 34) 00:07:39.149 5822.622 - 5847.828: 3.0558% ( 35) 00:07:39.149 5847.828 - 5873.034: 3.3722% ( 32) 00:07:39.149 5873.034 - 5898.240: 3.7579% ( 39) 00:07:39.149 5898.240 - 5923.446: 4.1634% ( 41) 00:07:39.149 5923.446 - 5948.652: 4.5589% ( 40) 00:07:39.149 5948.652 - 5973.858: 4.9248% ( 37) 00:07:39.149 5973.858 - 5999.065: 5.3600% ( 44) 00:07:39.149 5999.065 - 6024.271: 5.8544% ( 50) 00:07:39.149 6024.271 - 6049.477: 6.3588% ( 51) 00:07:39.149 6049.477 - 6074.683: 6.8236% ( 47) 00:07:39.149 6074.683 - 6099.889: 7.2686% ( 45) 00:07:39.149 6099.889 - 6125.095: 7.7235% ( 46) 00:07:39.149 6125.095 - 6150.302: 8.2081% ( 49) 00:07:39.149 6150.302 - 6175.508: 8.6531% ( 45) 00:07:39.149 6175.508 - 6200.714: 9.1673% ( 52) 00:07:39.149 6200.714 - 6225.920: 9.6222% ( 46) 00:07:39.149 6225.920 - 6251.126: 10.0672% ( 45) 00:07:39.149 6251.126 - 6276.332: 10.5518% ( 49) 00:07:39.149 6276.332 - 6301.538: 10.9672% ( 42) 00:07:39.149 6301.538 - 6326.745: 11.3726% ( 41) 00:07:39.149 6326.745 - 6351.951: 11.7583% ( 39) 00:07:39.149 6351.951 - 6377.157: 12.1440% ( 39) 00:07:39.149 6377.157 - 6402.363: 12.5297% ( 39) 00:07:39.149 6402.363 - 6427.569: 12.8560% ( 33) 00:07:39.149 6427.569 - 6452.775: 13.2120% ( 36) 00:07:39.149 6452.775 - 6503.188: 13.7955% ( 59) 00:07:39.149 6503.188 - 6553.600: 14.3790% ( 59) 00:07:39.149 6553.600 - 6604.012: 14.8932% ( 52) 00:07:39.149 6604.012 - 6654.425: 15.3481% ( 46) 00:07:39.149 6654.425 - 6704.837: 15.7437% ( 40) 00:07:39.149 6704.837 - 6755.249: 16.0997% ( 36) 00:07:39.149 6755.249 - 6805.662: 16.3370% ( 24) 00:07:39.149 6805.662 - 6856.074: 16.5744% ( 24) 00:07:39.149 6856.074 - 6906.486: 16.8117% ( 24) 00:07:39.149 6906.486 - 6956.898: 16.9996% ( 19) 00:07:39.149 6956.898 - 7007.311: 17.1974% ( 20) 00:07:39.149 7007.311 - 7057.723: 17.3853% ( 19) 00:07:39.149 7057.723 - 7108.135: 17.5336% ( 15) 00:07:39.149 7108.135 - 7158.548: 17.6919% ( 16) 00:07:39.149 7158.548 - 7208.960: 17.8501% ( 16) 00:07:39.149 7208.960 - 7259.372: 18.0281% ( 18) 
00:07:39.149 7259.372 - 7309.785: 18.1962% ( 17) 00:07:39.149 7309.785 - 7360.197: 18.3841% ( 19) 00:07:39.149 7360.197 - 7410.609: 18.5423% ( 16) 00:07:39.149 7410.609 - 7461.022: 18.7797% ( 24) 00:07:39.149 7461.022 - 7511.434: 18.9082% ( 13) 00:07:39.149 7511.434 - 7561.846: 19.0665% ( 16) 00:07:39.149 7561.846 - 7612.258: 19.2741% ( 21) 00:07:39.149 7612.258 - 7662.671: 19.5510% ( 28) 00:07:39.149 7662.671 - 7713.083: 19.7983% ( 25) 00:07:39.149 7713.083 - 7763.495: 20.1741% ( 38) 00:07:39.149 7763.495 - 7813.908: 20.4213% ( 25) 00:07:39.149 7813.908 - 7864.320: 20.7081% ( 29) 00:07:39.149 7864.320 - 7914.732: 21.0047% ( 30) 00:07:39.149 7914.732 - 7965.145: 21.2816% ( 28) 00:07:39.149 7965.145 - 8015.557: 21.5289% ( 25) 00:07:39.149 8015.557 - 8065.969: 21.8453% ( 32) 00:07:39.149 8065.969 - 8116.382: 22.2805% ( 44) 00:07:39.149 8116.382 - 8166.794: 22.5771% ( 30) 00:07:39.149 8166.794 - 8217.206: 22.9331% ( 36) 00:07:39.149 8217.206 - 8267.618: 23.2496% ( 32) 00:07:39.149 8267.618 - 8318.031: 23.6155% ( 37) 00:07:39.149 8318.031 - 8368.443: 23.9320% ( 32) 00:07:39.149 8368.443 - 8418.855: 24.1990% ( 27) 00:07:39.149 8418.855 - 8469.268: 24.4363% ( 24) 00:07:39.149 8469.268 - 8519.680: 24.6737% ( 24) 00:07:39.149 8519.680 - 8570.092: 24.9110% ( 24) 00:07:39.149 8570.092 - 8620.505: 25.1088% ( 20) 00:07:39.149 8620.505 - 8670.917: 25.3066% ( 20) 00:07:39.149 8670.917 - 8721.329: 25.4648% ( 16) 00:07:39.149 8721.329 - 8771.742: 25.6922% ( 23) 00:07:39.149 8771.742 - 8822.154: 25.9197% ( 23) 00:07:39.149 8822.154 - 8872.566: 26.0977% ( 18) 00:07:39.149 8872.566 - 8922.978: 26.3746% ( 28) 00:07:39.149 8922.978 - 8973.391: 26.6021% ( 23) 00:07:39.149 8973.391 - 9023.803: 26.8295% ( 23) 00:07:39.149 9023.803 - 9074.215: 27.1163% ( 29) 00:07:39.149 9074.215 - 9124.628: 27.3932% ( 28) 00:07:39.149 9124.628 - 9175.040: 27.6503% ( 26) 00:07:39.149 9175.040 - 9225.452: 27.8975% ( 25) 00:07:39.149 9225.452 - 9275.865: 28.1843% ( 29) 00:07:39.149 9275.865 - 9326.277: 28.4415% ( 26) 00:07:39.149 9326.277 - 9376.689: 28.6887% ( 25) 00:07:39.149 9376.689 - 9427.102: 28.9458% ( 26) 00:07:39.149 9427.102 - 9477.514: 29.2029% ( 26) 00:07:39.149 9477.514 - 9527.926: 29.4600% ( 26) 00:07:39.149 9527.926 - 9578.338: 29.6974% ( 24) 00:07:39.149 9578.338 - 9628.751: 29.9051% ( 21) 00:07:39.149 9628.751 - 9679.163: 30.1127% ( 21) 00:07:39.149 9679.163 - 9729.575: 30.3204% ( 21) 00:07:39.149 9729.575 - 9779.988: 30.6566% ( 34) 00:07:39.149 9779.988 - 9830.400: 30.9237% ( 27) 00:07:39.149 9830.400 - 9880.812: 31.1214% ( 20) 00:07:39.149 9880.812 - 9931.225: 31.2698% ( 15) 00:07:39.149 9931.225 - 9981.637: 31.4280% ( 16) 00:07:39.149 9981.637 - 10032.049: 31.5269% ( 10) 00:07:39.149 10032.049 - 10082.462: 31.6752% ( 15) 00:07:39.149 10082.462 - 10132.874: 31.8730% ( 20) 00:07:39.149 10132.874 - 10183.286: 32.1598% ( 29) 00:07:39.149 10183.286 - 10233.698: 32.3477% ( 19) 00:07:39.149 10233.698 - 10284.111: 32.5059% ( 16) 00:07:39.149 10284.111 - 10334.523: 32.7136% ( 21) 00:07:39.149 10334.523 - 10384.935: 32.9015% ( 19) 00:07:39.149 10384.935 - 10435.348: 33.1290% ( 23) 00:07:39.149 10435.348 - 10485.760: 33.3465% ( 22) 00:07:39.149 10485.760 - 10536.172: 33.4949% ( 15) 00:07:39.149 10536.172 - 10586.585: 33.6333% ( 14) 00:07:39.149 10586.585 - 10636.997: 33.7816% ( 15) 00:07:39.149 10636.997 - 10687.409: 33.9992% ( 22) 00:07:39.149 10687.409 - 10737.822: 34.1970% ( 20) 00:07:39.149 10737.822 - 10788.234: 34.3948% ( 20) 00:07:39.149 10788.234 - 10838.646: 34.5926% ( 20) 00:07:39.149 10838.646 - 10889.058: 
34.8794% ( 29) 00:07:39.149 10889.058 - 10939.471: 35.1661% ( 29) 00:07:39.149 10939.471 - 10989.883: 35.4035% ( 24) 00:07:39.149 10989.883 - 11040.295: 35.6606% ( 26) 00:07:39.149 11040.295 - 11090.708: 35.8683% ( 21) 00:07:39.149 11090.708 - 11141.120: 36.0463% ( 18) 00:07:39.149 11141.120 - 11191.532: 36.2540% ( 21) 00:07:39.149 11191.532 - 11241.945: 36.5111% ( 26) 00:07:39.149 11241.945 - 11292.357: 36.7781% ( 27) 00:07:39.149 11292.357 - 11342.769: 37.0154% ( 24) 00:07:39.149 11342.769 - 11393.182: 37.2528% ( 24) 00:07:39.149 11393.182 - 11443.594: 37.5297% ( 28) 00:07:39.149 11443.594 - 11494.006: 37.8659% ( 34) 00:07:39.149 11494.006 - 11544.418: 38.1922% ( 33) 00:07:39.149 11544.418 - 11594.831: 38.6274% ( 44) 00:07:39.149 11594.831 - 11645.243: 39.0229% ( 40) 00:07:39.149 11645.243 - 11695.655: 39.4778% ( 46) 00:07:39.149 11695.655 - 11746.068: 39.8833% ( 41) 00:07:39.149 11746.068 - 11796.480: 40.2294% ( 35) 00:07:39.149 11796.480 - 11846.892: 40.6744% ( 45) 00:07:39.149 11846.892 - 11897.305: 41.0601% ( 39) 00:07:39.149 11897.305 - 11947.717: 41.3865% ( 33) 00:07:39.149 11947.717 - 11998.129: 41.7128% ( 33) 00:07:39.149 11998.129 - 12048.542: 42.0787% ( 37) 00:07:39.149 12048.542 - 12098.954: 42.4941% ( 42) 00:07:39.149 12098.954 - 12149.366: 42.8896% ( 40) 00:07:39.149 12149.366 - 12199.778: 43.3248% ( 44) 00:07:39.149 12199.778 - 12250.191: 43.7203% ( 40) 00:07:39.149 12250.191 - 12300.603: 44.1555% ( 44) 00:07:39.149 12300.603 - 12351.015: 44.6203% ( 47) 00:07:39.149 12351.015 - 12401.428: 45.1246% ( 51) 00:07:39.149 12401.428 - 12451.840: 45.6092% ( 49) 00:07:39.150 12451.840 - 12502.252: 46.0740% ( 47) 00:07:39.150 12502.252 - 12552.665: 46.5487% ( 48) 00:07:39.150 12552.665 - 12603.077: 46.9838% ( 44) 00:07:39.150 12603.077 - 12653.489: 47.5376% ( 56) 00:07:39.150 12653.489 - 12703.902: 48.0320% ( 50) 00:07:39.150 12703.902 - 12754.314: 48.5364% ( 51) 00:07:39.150 12754.314 - 12804.726: 49.0210% ( 49) 00:07:39.150 12804.726 - 12855.138: 49.6143% ( 60) 00:07:39.150 12855.138 - 12905.551: 50.2670% ( 66) 00:07:39.150 12905.551 - 13006.375: 51.4438% ( 119) 00:07:39.150 13006.375 - 13107.200: 52.6305% ( 120) 00:07:39.150 13107.200 - 13208.025: 53.7876% ( 117) 00:07:39.150 13208.025 - 13308.849: 54.9644% ( 119) 00:07:39.150 13308.849 - 13409.674: 56.0522% ( 110) 00:07:39.150 13409.674 - 13510.498: 56.9818% ( 94) 00:07:39.150 13510.498 - 13611.323: 57.9213% ( 95) 00:07:39.150 13611.323 - 13712.148: 58.7322% ( 82) 00:07:39.150 13712.148 - 13812.972: 59.5134% ( 79) 00:07:39.150 13812.972 - 13913.797: 60.2354% ( 73) 00:07:39.150 13913.797 - 14014.622: 60.9573% ( 73) 00:07:39.150 14014.622 - 14115.446: 61.7781% ( 83) 00:07:39.150 14115.446 - 14216.271: 62.4308% ( 66) 00:07:39.150 14216.271 - 14317.095: 63.1230% ( 70) 00:07:39.150 14317.095 - 14417.920: 63.6966% ( 58) 00:07:39.150 14417.920 - 14518.745: 64.2702% ( 58) 00:07:39.150 14518.745 - 14619.569: 64.7844% ( 52) 00:07:39.150 14619.569 - 14720.394: 65.3877% ( 61) 00:07:39.150 14720.394 - 14821.218: 66.0008% ( 62) 00:07:39.150 14821.218 - 14922.043: 66.5645% ( 57) 00:07:39.150 14922.043 - 15022.868: 67.1381% ( 58) 00:07:39.150 15022.868 - 15123.692: 67.6820% ( 55) 00:07:39.150 15123.692 - 15224.517: 68.2456% ( 57) 00:07:39.150 15224.517 - 15325.342: 68.7896% ( 55) 00:07:39.150 15325.342 - 15426.166: 69.4324% ( 65) 00:07:39.150 15426.166 - 15526.991: 70.2532% ( 83) 00:07:39.150 15526.991 - 15627.815: 71.1630% ( 92) 00:07:39.150 15627.815 - 15728.640: 71.7464% ( 59) 00:07:39.150 15728.640 - 15829.465: 72.5079% ( 77) 00:07:39.150 
15829.465 - 15930.289: 73.4375% ( 94) 00:07:39.150 15930.289 - 16031.114: 74.2781% ( 85) 00:07:39.150 16031.114 - 16131.938: 75.1780% ( 91) 00:07:39.150 16131.938 - 16232.763: 76.0186% ( 85) 00:07:39.150 16232.763 - 16333.588: 76.9185% ( 91) 00:07:39.150 16333.588 - 16434.412: 77.7591% ( 85) 00:07:39.150 16434.412 - 16535.237: 78.5107% ( 76) 00:07:39.150 16535.237 - 16636.062: 79.1930% ( 69) 00:07:39.150 16636.062 - 16736.886: 79.8853% ( 70) 00:07:39.150 16736.886 - 16837.711: 80.5874% ( 71) 00:07:39.150 16837.711 - 16938.535: 81.2104% ( 63) 00:07:39.150 16938.535 - 17039.360: 81.8434% ( 64) 00:07:39.150 17039.360 - 17140.185: 82.6147% ( 78) 00:07:39.150 17140.185 - 17241.009: 83.3465% ( 74) 00:07:39.150 17241.009 - 17341.834: 84.0289% ( 69) 00:07:39.150 17341.834 - 17442.658: 84.7211% ( 70) 00:07:39.150 17442.658 - 17543.483: 85.4925% ( 78) 00:07:39.150 17543.483 - 17644.308: 86.3726% ( 89) 00:07:39.150 17644.308 - 17745.132: 87.3121% ( 95) 00:07:39.150 17745.132 - 17845.957: 88.1626% ( 86) 00:07:39.150 17845.957 - 17946.782: 89.0131% ( 86) 00:07:39.150 17946.782 - 18047.606: 89.8536% ( 85) 00:07:39.150 18047.606 - 18148.431: 90.4964% ( 65) 00:07:39.150 18148.431 - 18249.255: 91.1294% ( 64) 00:07:39.150 18249.255 - 18350.080: 91.8513% ( 73) 00:07:39.150 18350.080 - 18450.905: 92.6028% ( 76) 00:07:39.150 18450.905 - 18551.729: 93.3445% ( 75) 00:07:39.150 18551.729 - 18652.554: 94.0467% ( 71) 00:07:39.150 18652.554 - 18753.378: 94.6499% ( 61) 00:07:39.150 18753.378 - 18854.203: 95.1642% ( 52) 00:07:39.150 18854.203 - 18955.028: 95.6191% ( 46) 00:07:39.150 18955.028 - 19055.852: 96.0641% ( 45) 00:07:39.150 19055.852 - 19156.677: 96.5190% ( 46) 00:07:39.150 19156.677 - 19257.502: 96.8849% ( 37) 00:07:39.150 19257.502 - 19358.326: 97.2607% ( 38) 00:07:39.150 19358.326 - 19459.151: 97.5969% ( 34) 00:07:39.150 19459.151 - 19559.975: 97.9134% ( 32) 00:07:39.150 19559.975 - 19660.800: 98.2298% ( 32) 00:07:39.150 19660.800 - 19761.625: 98.4375% ( 21) 00:07:39.150 19761.625 - 19862.449: 98.5759% ( 14) 00:07:39.150 19862.449 - 19963.274: 98.6748% ( 10) 00:07:39.150 19963.274 - 20064.098: 98.7243% ( 5) 00:07:39.150 20064.098 - 20164.923: 98.7342% ( 1) 00:07:39.150 27827.594 - 28029.243: 98.8133% ( 8) 00:07:39.150 28029.243 - 28230.892: 98.8924% ( 8) 00:07:39.150 28230.892 - 28432.542: 98.9715% ( 8) 00:07:39.150 28432.542 - 28634.191: 99.0605% ( 9) 00:07:39.150 28634.191 - 28835.840: 99.1297% ( 7) 00:07:39.150 28835.840 - 29037.489: 99.2089% ( 8) 00:07:39.150 29037.489 - 29239.138: 99.2880% ( 8) 00:07:39.150 29239.138 - 29440.788: 99.3671% ( 8) 00:07:39.150 37708.406 - 37910.055: 99.3968% ( 3) 00:07:39.150 37910.055 - 38111.705: 99.4363% ( 4) 00:07:39.150 38111.705 - 38313.354: 99.5253% ( 9) 00:07:39.150 38313.354 - 38515.003: 99.5945% ( 7) 00:07:39.150 38515.003 - 38716.652: 99.6835% ( 9) 00:07:39.150 38716.652 - 38918.302: 99.7725% ( 9) 00:07:39.150 38918.302 - 39119.951: 99.8517% ( 8) 00:07:39.150 39119.951 - 39321.600: 99.9407% ( 9) 00:07:39.150 39321.600 - 39523.249: 100.0000% ( 6) 00:07:39.150 00:07:39.150 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.150 ============================================================================== 00:07:39.150 Range in us Cumulative IO count 00:07:39.150 5368.911 - 5394.117: 0.0297% ( 3) 00:07:39.150 5394.117 - 5419.323: 0.0396% ( 1) 00:07:39.150 5419.323 - 5444.529: 0.0890% ( 5) 00:07:39.150 5444.529 - 5469.735: 0.1681% ( 8) 00:07:39.150 5469.735 - 5494.942: 0.2176% ( 5) 00:07:39.150 5494.942 - 5520.148: 0.3659% ( 15) 00:07:39.150 
5520.148 - 5545.354: 0.4747% ( 11) 00:07:39.150 5545.354 - 5570.560: 0.5736% ( 10) 00:07:39.150 5570.560 - 5595.766: 0.7417% ( 17) 00:07:39.150 5595.766 - 5620.972: 0.9296% ( 19) 00:07:39.150 5620.972 - 5646.178: 1.0680% ( 14) 00:07:39.150 5646.178 - 5671.385: 1.2955% ( 23) 00:07:39.150 5671.385 - 5696.591: 1.4834% ( 19) 00:07:39.150 5696.591 - 5721.797: 1.8097% ( 33) 00:07:39.150 5721.797 - 5747.003: 2.1460% ( 34) 00:07:39.150 5747.003 - 5772.209: 2.4229% ( 28) 00:07:39.150 5772.209 - 5797.415: 2.8283% ( 41) 00:07:39.150 5797.415 - 5822.622: 3.1448% ( 32) 00:07:39.150 5822.622 - 5847.828: 3.5799% ( 44) 00:07:39.150 5847.828 - 5873.034: 3.8172% ( 24) 00:07:39.150 5873.034 - 5898.240: 4.2326% ( 42) 00:07:39.150 5898.240 - 5923.446: 4.5491% ( 32) 00:07:39.150 5923.446 - 5948.652: 4.9545% ( 41) 00:07:39.150 5948.652 - 5973.858: 5.3303% ( 38) 00:07:39.150 5973.858 - 5999.065: 5.7358% ( 41) 00:07:39.150 5999.065 - 6024.271: 6.1610% ( 43) 00:07:39.150 6024.271 - 6049.477: 6.5961% ( 44) 00:07:39.150 6049.477 - 6074.683: 7.0214% ( 43) 00:07:39.150 6074.683 - 6099.889: 7.4862% ( 47) 00:07:39.150 6099.889 - 6125.095: 7.8817% ( 40) 00:07:39.150 6125.095 - 6150.302: 8.2674% ( 39) 00:07:39.150 6150.302 - 6175.508: 8.7025% ( 44) 00:07:39.150 6175.508 - 6200.714: 9.2069% ( 51) 00:07:39.150 6200.714 - 6225.920: 9.6717% ( 47) 00:07:39.150 6225.920 - 6251.126: 10.1562% ( 49) 00:07:39.150 6251.126 - 6276.332: 10.5419% ( 39) 00:07:39.150 6276.332 - 6301.538: 10.9573% ( 42) 00:07:39.150 6301.538 - 6326.745: 11.3528% ( 40) 00:07:39.150 6326.745 - 6351.951: 11.7385% ( 39) 00:07:39.150 6351.951 - 6377.157: 12.1242% ( 39) 00:07:39.150 6377.157 - 6402.363: 12.4308% ( 31) 00:07:39.150 6402.363 - 6427.569: 12.8560% ( 43) 00:07:39.150 6427.569 - 6452.775: 13.1824% ( 33) 00:07:39.150 6452.775 - 6503.188: 13.8252% ( 65) 00:07:39.150 6503.188 - 6553.600: 14.4086% ( 59) 00:07:39.150 6553.600 - 6604.012: 14.9624% ( 56) 00:07:39.150 6604.012 - 6654.425: 15.4074% ( 45) 00:07:39.150 6654.425 - 6704.837: 15.8129% ( 41) 00:07:39.150 6704.837 - 6755.249: 16.1689% ( 36) 00:07:39.150 6755.249 - 6805.662: 16.4458% ( 28) 00:07:39.150 6805.662 - 6856.074: 16.7722% ( 33) 00:07:39.150 6856.074 - 6906.486: 17.0095% ( 24) 00:07:39.150 6906.486 - 6956.898: 17.2073% ( 20) 00:07:39.150 6956.898 - 7007.311: 17.3754% ( 17) 00:07:39.150 7007.311 - 7057.723: 17.5732% ( 20) 00:07:39.150 7057.723 - 7108.135: 17.7413% ( 17) 00:07:39.150 7108.135 - 7158.548: 17.9094% ( 17) 00:07:39.150 7158.548 - 7208.960: 18.1270% ( 22) 00:07:39.150 7208.960 - 7259.372: 18.2852% ( 16) 00:07:39.150 7259.372 - 7309.785: 18.4335% ( 15) 00:07:39.150 7309.785 - 7360.197: 18.6116% ( 18) 00:07:39.150 7360.197 - 7410.609: 18.8093% ( 20) 00:07:39.150 7410.609 - 7461.022: 18.9775% ( 17) 00:07:39.150 7461.022 - 7511.434: 19.1752% ( 20) 00:07:39.150 7511.434 - 7561.846: 19.3532% ( 18) 00:07:39.150 7561.846 - 7612.258: 19.5115% ( 16) 00:07:39.150 7612.258 - 7662.671: 19.7191% ( 21) 00:07:39.150 7662.671 - 7713.083: 19.9960% ( 28) 00:07:39.150 7713.083 - 7763.495: 20.2037% ( 21) 00:07:39.150 7763.495 - 7813.908: 20.4608% ( 26) 00:07:39.150 7813.908 - 7864.320: 20.7081% ( 25) 00:07:39.150 7864.320 - 7914.732: 21.0245% ( 32) 00:07:39.150 7914.732 - 7965.145: 21.2816% ( 26) 00:07:39.150 7965.145 - 8015.557: 21.5289% ( 25) 00:07:39.150 8015.557 - 8065.969: 21.8157% ( 29) 00:07:39.150 8065.969 - 8116.382: 22.0629% ( 25) 00:07:39.150 8116.382 - 8166.794: 22.3794% ( 32) 00:07:39.150 8166.794 - 8217.206: 22.7156% ( 34) 00:07:39.150 8217.206 - 8267.618: 23.0320% ( 32) 00:07:39.150 
8267.618 - 8318.031: 23.3782% ( 35) 00:07:39.150 8318.031 - 8368.443: 23.7144% ( 34) 00:07:39.150 8368.443 - 8418.855: 23.9715% ( 26) 00:07:39.150 8418.855 - 8469.268: 24.2089% ( 24) 00:07:39.151 8469.268 - 8519.680: 24.4363% ( 23) 00:07:39.151 8519.680 - 8570.092: 24.7231% ( 29) 00:07:39.151 8570.092 - 8620.505: 24.9407% ( 22) 00:07:39.151 8620.505 - 8670.917: 25.2472% ( 31) 00:07:39.151 8670.917 - 8721.329: 25.5142% ( 27) 00:07:39.151 8721.329 - 8771.742: 25.7812% ( 27) 00:07:39.151 8771.742 - 8822.154: 26.0384% ( 26) 00:07:39.151 8822.154 - 8872.566: 26.2263% ( 19) 00:07:39.151 8872.566 - 8922.978: 26.4933% ( 27) 00:07:39.151 8922.978 - 8973.391: 26.7998% ( 31) 00:07:39.151 8973.391 - 9023.803: 27.0273% ( 23) 00:07:39.151 9023.803 - 9074.215: 27.3438% ( 32) 00:07:39.151 9074.215 - 9124.628: 27.5910% ( 25) 00:07:39.151 9124.628 - 9175.040: 27.8382% ( 25) 00:07:39.151 9175.040 - 9225.452: 28.1052% ( 27) 00:07:39.151 9225.452 - 9275.865: 28.3228% ( 22) 00:07:39.151 9275.865 - 9326.277: 28.5502% ( 23) 00:07:39.151 9326.277 - 9376.689: 28.8074% ( 26) 00:07:39.151 9376.689 - 9427.102: 29.0051% ( 20) 00:07:39.151 9427.102 - 9477.514: 29.1733% ( 17) 00:07:39.151 9477.514 - 9527.926: 29.3513% ( 18) 00:07:39.151 9527.926 - 9578.338: 29.5194% ( 17) 00:07:39.151 9578.338 - 9628.751: 29.6677% ( 15) 00:07:39.151 9628.751 - 9679.163: 29.8062% ( 14) 00:07:39.151 9679.163 - 9729.575: 29.9446% ( 14) 00:07:39.151 9729.575 - 9779.988: 30.1028% ( 16) 00:07:39.151 9779.988 - 9830.400: 30.2116% ( 11) 00:07:39.151 9830.400 - 9880.812: 30.4094% ( 20) 00:07:39.151 9880.812 - 9931.225: 30.5182% ( 11) 00:07:39.151 9931.225 - 9981.637: 30.6369% ( 12) 00:07:39.151 9981.637 - 10032.049: 30.7358% ( 10) 00:07:39.151 10032.049 - 10082.462: 30.8841% ( 15) 00:07:39.151 10082.462 - 10132.874: 30.9632% ( 8) 00:07:39.151 10132.874 - 10183.286: 31.1214% ( 16) 00:07:39.151 10183.286 - 10233.698: 31.2302% ( 11) 00:07:39.151 10233.698 - 10284.111: 31.3291% ( 10) 00:07:39.151 10284.111 - 10334.523: 31.4577% ( 13) 00:07:39.151 10334.523 - 10384.935: 31.6653% ( 21) 00:07:39.151 10384.935 - 10435.348: 31.7840% ( 12) 00:07:39.151 10435.348 - 10485.760: 32.0115% ( 23) 00:07:39.151 10485.760 - 10536.172: 32.2785% ( 27) 00:07:39.151 10536.172 - 10586.585: 32.5554% ( 28) 00:07:39.151 10586.585 - 10636.997: 32.7037% ( 15) 00:07:39.151 10636.997 - 10687.409: 32.9213% ( 22) 00:07:39.151 10687.409 - 10737.822: 33.1487% ( 23) 00:07:39.151 10737.822 - 10788.234: 33.4553% ( 31) 00:07:39.151 10788.234 - 10838.646: 33.6828% ( 23) 00:07:39.151 10838.646 - 10889.058: 33.8608% ( 18) 00:07:39.151 10889.058 - 10939.471: 34.1970% ( 34) 00:07:39.151 10939.471 - 10989.883: 34.6717% ( 48) 00:07:39.151 10989.883 - 11040.295: 34.9782% ( 31) 00:07:39.151 11040.295 - 11090.708: 35.2848% ( 31) 00:07:39.151 11090.708 - 11141.120: 35.6013% ( 32) 00:07:39.151 11141.120 - 11191.532: 36.1353% ( 54) 00:07:39.151 11191.532 - 11241.945: 36.4221% ( 29) 00:07:39.151 11241.945 - 11292.357: 36.8176% ( 40) 00:07:39.151 11292.357 - 11342.769: 37.1638% ( 35) 00:07:39.151 11342.769 - 11393.182: 37.6286% ( 47) 00:07:39.151 11393.182 - 11443.594: 38.0439% ( 42) 00:07:39.151 11443.594 - 11494.006: 38.5779% ( 54) 00:07:39.151 11494.006 - 11544.418: 38.9438% ( 37) 00:07:39.151 11544.418 - 11594.831: 39.2603% ( 32) 00:07:39.151 11594.831 - 11645.243: 39.8240% ( 57) 00:07:39.151 11645.243 - 11695.655: 40.1800% ( 36) 00:07:39.151 11695.655 - 11746.068: 40.6151% ( 44) 00:07:39.151 11746.068 - 11796.480: 41.0107% ( 40) 00:07:39.151 11796.480 - 11846.892: 41.3964% ( 39) 00:07:39.151 
11846.892 - 11897.305: 41.8710% ( 48) 00:07:39.151 11897.305 - 11947.717: 42.2073% ( 34) 00:07:39.151 11947.717 - 11998.129: 42.6325% ( 43) 00:07:39.151 11998.129 - 12048.542: 43.0775% ( 45) 00:07:39.151 12048.542 - 12098.954: 43.4138% ( 34) 00:07:39.151 12098.954 - 12149.366: 43.8884% ( 48) 00:07:39.151 12149.366 - 12199.778: 44.2642% ( 38) 00:07:39.151 12199.778 - 12250.191: 44.8477% ( 59) 00:07:39.151 12250.191 - 12300.603: 45.2631% ( 42) 00:07:39.151 12300.603 - 12351.015: 45.7971% ( 54) 00:07:39.151 12351.015 - 12401.428: 46.1729% ( 38) 00:07:39.151 12401.428 - 12451.840: 46.6574% ( 49) 00:07:39.151 12451.840 - 12502.252: 47.1420% ( 49) 00:07:39.151 12502.252 - 12552.665: 47.5574% ( 42) 00:07:39.151 12552.665 - 12603.077: 47.9529% ( 40) 00:07:39.151 12603.077 - 12653.489: 48.4276% ( 48) 00:07:39.151 12653.489 - 12703.902: 48.9122% ( 49) 00:07:39.151 12703.902 - 12754.314: 49.3473% ( 44) 00:07:39.151 12754.314 - 12804.726: 49.7528% ( 41) 00:07:39.151 12804.726 - 12855.138: 50.1681% ( 42) 00:07:39.151 12855.138 - 12905.551: 50.6131% ( 45) 00:07:39.151 12905.551 - 13006.375: 51.5724% ( 97) 00:07:39.151 13006.375 - 13107.200: 52.5514% ( 99) 00:07:39.151 13107.200 - 13208.025: 53.3623% ( 82) 00:07:39.151 13208.025 - 13308.849: 54.6183% ( 127) 00:07:39.151 13308.849 - 13409.674: 55.5182% ( 91) 00:07:39.151 13409.674 - 13510.498: 56.5269% ( 102) 00:07:39.151 13510.498 - 13611.323: 57.3972% ( 88) 00:07:39.151 13611.323 - 13712.148: 58.2081% ( 82) 00:07:39.151 13712.148 - 13812.972: 58.9300% ( 73) 00:07:39.151 13812.972 - 13913.797: 59.8794% ( 96) 00:07:39.151 13913.797 - 14014.622: 60.6408% ( 77) 00:07:39.151 14014.622 - 14115.446: 61.5111% ( 88) 00:07:39.151 14115.446 - 14216.271: 62.1539% ( 65) 00:07:39.151 14216.271 - 14317.095: 62.7670% ( 62) 00:07:39.151 14317.095 - 14417.920: 63.3801% ( 62) 00:07:39.151 14417.920 - 14518.745: 63.9241% ( 55) 00:07:39.151 14518.745 - 14619.569: 64.5767% ( 66) 00:07:39.151 14619.569 - 14720.394: 65.0613% ( 49) 00:07:39.151 14720.394 - 14821.218: 65.5558% ( 50) 00:07:39.151 14821.218 - 14922.043: 66.0502% ( 50) 00:07:39.151 14922.043 - 15022.868: 66.5150% ( 47) 00:07:39.151 15022.868 - 15123.692: 67.2666% ( 76) 00:07:39.151 15123.692 - 15224.517: 67.8797% ( 62) 00:07:39.151 15224.517 - 15325.342: 68.3940% ( 52) 00:07:39.151 15325.342 - 15426.166: 68.9972% ( 61) 00:07:39.151 15426.166 - 15526.991: 69.7191% ( 73) 00:07:39.151 15526.991 - 15627.815: 70.5498% ( 84) 00:07:39.151 15627.815 - 15728.640: 71.3311% ( 79) 00:07:39.151 15728.640 - 15829.465: 72.1025% ( 78) 00:07:39.151 15829.465 - 15930.289: 72.8343% ( 74) 00:07:39.151 15930.289 - 16031.114: 73.6155% ( 79) 00:07:39.151 16031.114 - 16131.938: 74.3671% ( 76) 00:07:39.151 16131.938 - 16232.763: 75.0494% ( 69) 00:07:39.151 16232.763 - 16333.588: 75.7318% ( 69) 00:07:39.151 16333.588 - 16434.412: 76.5625% ( 84) 00:07:39.151 16434.412 - 16535.237: 77.2251% ( 67) 00:07:39.151 16535.237 - 16636.062: 77.9569% ( 74) 00:07:39.151 16636.062 - 16736.886: 78.8172% ( 87) 00:07:39.151 16736.886 - 16837.711: 79.5194% ( 71) 00:07:39.151 16837.711 - 16938.535: 80.3600% ( 85) 00:07:39.151 16938.535 - 17039.360: 81.1313% ( 78) 00:07:39.151 17039.360 - 17140.185: 81.8236% ( 70) 00:07:39.151 17140.185 - 17241.009: 82.6048% ( 79) 00:07:39.151 17241.009 - 17341.834: 83.7223% ( 113) 00:07:39.151 17341.834 - 17442.658: 84.5827% ( 87) 00:07:39.151 17442.658 - 17543.483: 85.3639% ( 79) 00:07:39.151 17543.483 - 17644.308: 86.6199% ( 127) 00:07:39.151 17644.308 - 17745.132: 87.8560% ( 125) 00:07:39.151 17745.132 - 17845.957: 
88.8944% ( 105) 00:07:39.151 17845.957 - 17946.782: 90.0119% ( 113) 00:07:39.151 17946.782 - 18047.606: 90.9711% ( 97) 00:07:39.151 18047.606 - 18148.431: 91.9205% ( 96) 00:07:39.151 18148.431 - 18249.255: 92.8995% ( 99) 00:07:39.151 18249.255 - 18350.080: 93.5720% ( 68) 00:07:39.151 18350.080 - 18450.905: 94.6104% ( 105) 00:07:39.151 18450.905 - 18551.729: 95.2532% ( 65) 00:07:39.151 18551.729 - 18652.554: 95.7872% ( 54) 00:07:39.151 18652.554 - 18753.378: 96.5487% ( 77) 00:07:39.151 18753.378 - 18854.203: 96.9442% ( 40) 00:07:39.151 18854.203 - 18955.028: 97.3398% ( 40) 00:07:39.151 18955.028 - 19055.852: 97.5376% ( 20) 00:07:39.151 19055.852 - 19156.677: 97.7848% ( 25) 00:07:39.151 19156.677 - 19257.502: 97.9331% ( 15) 00:07:39.151 19257.502 - 19358.326: 98.1408% ( 21) 00:07:39.151 19358.326 - 19459.151: 98.2694% ( 13) 00:07:39.151 19459.151 - 19559.975: 98.4276% ( 16) 00:07:39.151 19559.975 - 19660.800: 98.4474% ( 2) 00:07:39.151 19660.800 - 19761.625: 98.5661% ( 12) 00:07:39.151 19761.625 - 19862.449: 98.5957% ( 3) 00:07:39.151 19862.449 - 19963.274: 98.6155% ( 2) 00:07:39.151 19963.274 - 20064.098: 98.6452% ( 3) 00:07:39.151 20064.098 - 20164.923: 98.7342% ( 9) 00:07:39.151 26617.698 - 26819.348: 98.7441% ( 1) 00:07:39.151 26819.348 - 27020.997: 98.8034% ( 6) 00:07:39.151 27020.997 - 27222.646: 98.8726% ( 7) 00:07:39.151 27222.646 - 27424.295: 98.9517% ( 8) 00:07:39.152 27424.295 - 27625.945: 99.0210% ( 7) 00:07:39.152 27625.945 - 27827.594: 99.1001% ( 8) 00:07:39.152 27827.594 - 28029.243: 99.1594% ( 6) 00:07:39.152 28029.243 - 28230.892: 99.2286% ( 7) 00:07:39.152 28230.892 - 28432.542: 99.3078% ( 8) 00:07:39.152 28432.542 - 28634.191: 99.3671% ( 6) 00:07:39.152 36095.212 - 36296.862: 99.3770% ( 1) 00:07:39.152 36296.862 - 36498.511: 99.4561% ( 8) 00:07:39.152 36498.511 - 36700.160: 99.5451% ( 9) 00:07:39.152 36700.160 - 36901.809: 99.5945% ( 5) 00:07:39.152 36901.809 - 37103.458: 99.6835% ( 9) 00:07:39.152 37103.458 - 37305.108: 99.7528% ( 7) 00:07:39.152 37305.108 - 37506.757: 99.8220% ( 7) 00:07:39.152 37506.757 - 37708.406: 99.8813% ( 6) 00:07:39.152 37708.406 - 37910.055: 99.9802% ( 10) 00:07:39.152 37910.055 - 38111.705: 100.0000% ( 2) 00:07:39.152 00:07:39.152 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.152 ============================================================================== 00:07:39.152 Range in us Cumulative IO count 00:07:39.152 5343.705 - 5368.911: 0.0198% ( 2) 00:07:39.152 5368.911 - 5394.117: 0.0494% ( 3) 00:07:39.152 5394.117 - 5419.323: 0.0692% ( 2) 00:07:39.152 5419.323 - 5444.529: 0.0791% ( 1) 00:07:39.152 5444.529 - 5469.735: 0.1088% ( 3) 00:07:39.152 5469.735 - 5494.942: 0.1879% ( 8) 00:07:39.152 5494.942 - 5520.148: 0.2571% ( 7) 00:07:39.152 5520.148 - 5545.354: 0.3362% ( 8) 00:07:39.152 5545.354 - 5570.560: 0.4549% ( 12) 00:07:39.152 5570.560 - 5595.766: 0.6725% ( 22) 00:07:39.152 5595.766 - 5620.972: 0.8505% ( 18) 00:07:39.152 5620.972 - 5646.178: 1.0087% ( 16) 00:07:39.152 5646.178 - 5671.385: 1.2263% ( 22) 00:07:39.152 5671.385 - 5696.591: 1.3845% ( 16) 00:07:39.152 5696.591 - 5721.797: 1.6021% ( 22) 00:07:39.152 5721.797 - 5747.003: 1.7900% ( 19) 00:07:39.152 5747.003 - 5772.209: 2.0570% ( 27) 00:07:39.152 5772.209 - 5797.415: 2.2943% ( 24) 00:07:39.152 5797.415 - 5822.622: 2.6602% ( 37) 00:07:39.152 5822.622 - 5847.828: 3.0162% ( 36) 00:07:39.152 5847.828 - 5873.034: 3.4019% ( 39) 00:07:39.152 5873.034 - 5898.240: 3.8271% ( 43) 00:07:39.152 5898.240 - 5923.446: 4.2524% ( 43) 00:07:39.152 5923.446 - 5948.652: 4.6677% ( 
42) 00:07:39.152 5948.652 - 5973.858: 5.0831% ( 42) 00:07:39.152 5973.858 - 5999.065: 5.5380% ( 46) 00:07:39.152 5999.065 - 6024.271: 6.0127% ( 48) 00:07:39.152 6024.271 - 6049.477: 6.4873% ( 48) 00:07:39.152 6049.477 - 6074.683: 6.9620% ( 48) 00:07:39.152 6074.683 - 6099.889: 7.5059% ( 55) 00:07:39.152 6099.889 - 6125.095: 7.9905% ( 49) 00:07:39.152 6125.095 - 6150.302: 8.4949% ( 51) 00:07:39.152 6150.302 - 6175.508: 8.9794% ( 49) 00:07:39.152 6175.508 - 6200.714: 9.4640% ( 49) 00:07:39.152 6200.714 - 6225.920: 9.9288% ( 47) 00:07:39.152 6225.920 - 6251.126: 10.3936% ( 47) 00:07:39.152 6251.126 - 6276.332: 10.8188% ( 43) 00:07:39.152 6276.332 - 6301.538: 11.2935% ( 48) 00:07:39.152 6301.538 - 6326.745: 11.6891% ( 40) 00:07:39.152 6326.745 - 6351.951: 12.0847% ( 40) 00:07:39.152 6351.951 - 6377.157: 12.4802% ( 40) 00:07:39.152 6377.157 - 6402.363: 12.8461% ( 37) 00:07:39.152 6402.363 - 6427.569: 13.2417% ( 40) 00:07:39.152 6427.569 - 6452.775: 13.5581% ( 32) 00:07:39.152 6452.775 - 6503.188: 14.2306% ( 68) 00:07:39.152 6503.188 - 6553.600: 14.8438% ( 62) 00:07:39.152 6553.600 - 6604.012: 15.4964% ( 66) 00:07:39.152 6604.012 - 6654.425: 15.9909% ( 50) 00:07:39.152 6654.425 - 6704.837: 16.3074% ( 32) 00:07:39.152 6704.837 - 6755.249: 16.5150% ( 21) 00:07:39.152 6755.249 - 6805.662: 16.8117% ( 30) 00:07:39.152 6805.662 - 6856.074: 17.0985% ( 29) 00:07:39.152 6856.074 - 6906.486: 17.2864% ( 19) 00:07:39.152 6906.486 - 6956.898: 17.4941% ( 21) 00:07:39.152 6956.898 - 7007.311: 17.7116% ( 22) 00:07:39.152 7007.311 - 7057.723: 17.8797% ( 17) 00:07:39.152 7057.723 - 7108.135: 18.0775% ( 20) 00:07:39.152 7108.135 - 7158.548: 18.2358% ( 16) 00:07:39.152 7158.548 - 7208.960: 18.4138% ( 18) 00:07:39.152 7208.960 - 7259.372: 18.5918% ( 18) 00:07:39.152 7259.372 - 7309.785: 18.7401% ( 15) 00:07:39.152 7309.785 - 7360.197: 18.8884% ( 15) 00:07:39.152 7360.197 - 7410.609: 19.0566% ( 17) 00:07:39.152 7410.609 - 7461.022: 19.2445% ( 19) 00:07:39.152 7461.022 - 7511.434: 19.3928% ( 15) 00:07:39.152 7511.434 - 7561.846: 19.5609% ( 17) 00:07:39.152 7561.846 - 7612.258: 19.7290% ( 17) 00:07:39.152 7612.258 - 7662.671: 19.8972% ( 17) 00:07:39.152 7662.671 - 7713.083: 20.0653% ( 17) 00:07:39.152 7713.083 - 7763.495: 20.3026% ( 24) 00:07:39.152 7763.495 - 7813.908: 20.5004% ( 20) 00:07:39.152 7813.908 - 7864.320: 20.7180% ( 22) 00:07:39.152 7864.320 - 7914.732: 20.9553% ( 24) 00:07:39.152 7914.732 - 7965.145: 21.1234% ( 17) 00:07:39.152 7965.145 - 8015.557: 21.3014% ( 18) 00:07:39.152 8015.557 - 8065.969: 21.4794% ( 18) 00:07:39.152 8065.969 - 8116.382: 21.6772% ( 20) 00:07:39.152 8116.382 - 8166.794: 21.8750% ( 20) 00:07:39.152 8166.794 - 8217.206: 22.1222% ( 25) 00:07:39.152 8217.206 - 8267.618: 22.3991% ( 28) 00:07:39.152 8267.618 - 8318.031: 22.6661% ( 27) 00:07:39.152 8318.031 - 8368.443: 22.9233% ( 26) 00:07:39.152 8368.443 - 8418.855: 23.1606% ( 24) 00:07:39.152 8418.855 - 8469.268: 23.4276% ( 27) 00:07:39.152 8469.268 - 8519.680: 23.7045% ( 28) 00:07:39.152 8519.680 - 8570.092: 23.9616% ( 26) 00:07:39.152 8570.092 - 8620.505: 24.2484% ( 29) 00:07:39.152 8620.505 - 8670.917: 24.5451% ( 30) 00:07:39.152 8670.917 - 8721.329: 24.8220% ( 28) 00:07:39.152 8721.329 - 8771.742: 25.1384% ( 32) 00:07:39.152 8771.742 - 8822.154: 25.4055% ( 27) 00:07:39.152 8822.154 - 8872.566: 25.6725% ( 27) 00:07:39.152 8872.566 - 8922.978: 25.8999% ( 23) 00:07:39.152 8922.978 - 8973.391: 26.1274% ( 23) 00:07:39.152 8973.391 - 9023.803: 26.3845% ( 26) 00:07:39.152 9023.803 - 9074.215: 26.6218% ( 24) 00:07:39.152 9074.215 - 9124.628: 
26.8691% ( 25) 00:07:39.152 9124.628 - 9175.040: 27.0471% ( 18) 00:07:39.152 9175.040 - 9225.452: 27.2547% ( 21) 00:07:39.152 9225.452 - 9275.865: 27.4426% ( 19) 00:07:39.152 9275.865 - 9326.277: 27.6206% ( 18) 00:07:39.152 9326.277 - 9376.689: 27.8085% ( 19) 00:07:39.152 9376.689 - 9427.102: 28.0459% ( 24) 00:07:39.152 9427.102 - 9477.514: 28.2832% ( 24) 00:07:39.152 9477.514 - 9527.926: 28.5206% ( 24) 00:07:39.152 9527.926 - 9578.338: 28.7282% ( 21) 00:07:39.152 9578.338 - 9628.751: 29.0348% ( 31) 00:07:39.152 9628.751 - 9679.163: 29.2919% ( 26) 00:07:39.152 9679.163 - 9729.575: 29.5589% ( 27) 00:07:39.152 9729.575 - 9779.988: 29.8161% ( 26) 00:07:39.152 9779.988 - 9830.400: 30.0534% ( 24) 00:07:39.152 9830.400 - 9880.812: 30.2809% ( 23) 00:07:39.152 9880.812 - 9931.225: 30.4786% ( 20) 00:07:39.152 9931.225 - 9981.637: 30.7259% ( 25) 00:07:39.152 9981.637 - 10032.049: 30.9632% ( 24) 00:07:39.152 10032.049 - 10082.462: 31.1511% ( 19) 00:07:39.152 10082.462 - 10132.874: 31.3588% ( 21) 00:07:39.152 10132.874 - 10183.286: 31.5566% ( 20) 00:07:39.152 10183.286 - 10233.698: 31.7346% ( 18) 00:07:39.152 10233.698 - 10284.111: 31.9225% ( 19) 00:07:39.152 10284.111 - 10334.523: 32.1104% ( 19) 00:07:39.152 10334.523 - 10384.935: 32.2686% ( 16) 00:07:39.152 10384.935 - 10435.348: 32.4268% ( 16) 00:07:39.152 10435.348 - 10485.760: 32.6147% ( 19) 00:07:39.152 10485.760 - 10536.172: 32.7927% ( 18) 00:07:39.152 10536.172 - 10586.585: 33.0696% ( 28) 00:07:39.152 10586.585 - 10636.997: 33.2674% ( 20) 00:07:39.152 10636.997 - 10687.409: 33.5146% ( 25) 00:07:39.152 10687.409 - 10737.822: 33.7816% ( 27) 00:07:39.152 10737.822 - 10788.234: 34.0388% ( 26) 00:07:39.152 10788.234 - 10838.646: 34.3157% ( 28) 00:07:39.152 10838.646 - 10889.058: 34.6519% ( 34) 00:07:39.152 10889.058 - 10939.471: 35.0277% ( 38) 00:07:39.152 10939.471 - 10989.883: 35.3244% ( 30) 00:07:39.152 10989.883 - 11040.295: 35.6013% ( 28) 00:07:39.152 11040.295 - 11090.708: 35.9177% ( 32) 00:07:39.152 11090.708 - 11141.120: 36.2045% ( 29) 00:07:39.152 11141.120 - 11191.532: 36.5407% ( 34) 00:07:39.152 11191.532 - 11241.945: 36.8374% ( 30) 00:07:39.152 11241.945 - 11292.357: 37.1934% ( 36) 00:07:39.152 11292.357 - 11342.769: 37.5198% ( 33) 00:07:39.152 11342.769 - 11393.182: 37.8461% ( 33) 00:07:39.152 11393.182 - 11443.594: 38.1329% ( 29) 00:07:39.152 11443.594 - 11494.006: 38.4197% ( 29) 00:07:39.152 11494.006 - 11544.418: 38.6966% ( 28) 00:07:39.152 11544.418 - 11594.831: 39.0131% ( 32) 00:07:39.152 11594.831 - 11645.243: 39.2998% ( 29) 00:07:39.152 11645.243 - 11695.655: 39.6855% ( 39) 00:07:39.152 11695.655 - 11746.068: 40.0119% ( 33) 00:07:39.152 11746.068 - 11796.480: 40.4173% ( 41) 00:07:39.152 11796.480 - 11846.892: 40.7832% ( 37) 00:07:39.152 11846.892 - 11897.305: 41.1590% ( 38) 00:07:39.152 11897.305 - 11947.717: 41.5546% ( 40) 00:07:39.152 11947.717 - 11998.129: 41.9699% ( 42) 00:07:39.152 11998.129 - 12048.542: 42.4446% ( 48) 00:07:39.152 12048.542 - 12098.954: 42.9589% ( 52) 00:07:39.152 12098.954 - 12149.366: 43.4335% ( 48) 00:07:39.152 12149.366 - 12199.778: 43.9577% ( 53) 00:07:39.153 12199.778 - 12250.191: 44.4027% ( 45) 00:07:39.153 12250.191 - 12300.603: 44.9367% ( 54) 00:07:39.153 12300.603 - 12351.015: 45.4411% ( 51) 00:07:39.153 12351.015 - 12401.428: 45.8465% ( 41) 00:07:39.153 12401.428 - 12451.840: 46.4201% ( 58) 00:07:39.153 12451.840 - 12502.252: 46.9739% ( 56) 00:07:39.153 12502.252 - 12552.665: 47.5475% ( 58) 00:07:39.153 12552.665 - 12603.077: 48.0716% ( 53) 00:07:39.153 12603.077 - 12653.489: 48.5661% ( 50) 
00:07:39.153 12653.489 - 12703.902: 49.0506% ( 49) 00:07:39.153 12703.902 - 12754.314: 49.5451% ( 50) 00:07:39.153 12754.314 - 12804.726: 50.0297% ( 49) 00:07:39.153 12804.726 - 12855.138: 50.5934% ( 57) 00:07:39.153 12855.138 - 12905.551: 51.0977% ( 51) 00:07:39.153 12905.551 - 13006.375: 52.0866% ( 100) 00:07:39.153 13006.375 - 13107.200: 52.9964% ( 92) 00:07:39.153 13107.200 - 13208.025: 53.9359% ( 95) 00:07:39.153 13208.025 - 13308.849: 55.1226% ( 120) 00:07:39.153 13308.849 - 13409.674: 56.1709% ( 106) 00:07:39.153 13409.674 - 13510.498: 57.0312% ( 87) 00:07:39.153 13510.498 - 13611.323: 57.9509% ( 93) 00:07:39.153 13611.323 - 13712.148: 58.7816% ( 84) 00:07:39.153 13712.148 - 13812.972: 59.5134% ( 74) 00:07:39.153 13812.972 - 13913.797: 60.2848% ( 78) 00:07:39.153 13913.797 - 14014.622: 61.0166% ( 74) 00:07:39.153 14014.622 - 14115.446: 61.7286% ( 72) 00:07:39.153 14115.446 - 14216.271: 62.3022% ( 58) 00:07:39.153 14216.271 - 14317.095: 62.8461% ( 55) 00:07:39.153 14317.095 - 14417.920: 63.3307% ( 49) 00:07:39.153 14417.920 - 14518.745: 63.7856% ( 46) 00:07:39.153 14518.745 - 14619.569: 64.2504% ( 47) 00:07:39.153 14619.569 - 14720.394: 64.9426% ( 70) 00:07:39.153 14720.394 - 14821.218: 65.5756% ( 64) 00:07:39.153 14821.218 - 14922.043: 66.1986% ( 63) 00:07:39.153 14922.043 - 15022.868: 66.9600% ( 77) 00:07:39.153 15022.868 - 15123.692: 67.8797% ( 93) 00:07:39.153 15123.692 - 15224.517: 68.5522% ( 68) 00:07:39.153 15224.517 - 15325.342: 69.2840% ( 74) 00:07:39.153 15325.342 - 15426.166: 70.1741% ( 90) 00:07:39.153 15426.166 - 15526.991: 70.8169% ( 65) 00:07:39.153 15526.991 - 15627.815: 71.5388% ( 73) 00:07:39.153 15627.815 - 15728.640: 72.1519% ( 62) 00:07:39.153 15728.640 - 15829.465: 72.7453% ( 60) 00:07:39.153 15829.465 - 15930.289: 73.3089% ( 57) 00:07:39.153 15930.289 - 16031.114: 73.9122% ( 61) 00:07:39.153 16031.114 - 16131.938: 74.5847% ( 68) 00:07:39.153 16131.938 - 16232.763: 75.3263% ( 75) 00:07:39.153 16232.763 - 16333.588: 76.0779% ( 76) 00:07:39.153 16333.588 - 16434.412: 76.9680% ( 90) 00:07:39.153 16434.412 - 16535.237: 77.8085% ( 85) 00:07:39.153 16535.237 - 16636.062: 78.8074% ( 101) 00:07:39.153 16636.062 - 16736.886: 79.6677% ( 87) 00:07:39.153 16736.886 - 16837.711: 80.4490% ( 79) 00:07:39.153 16837.711 - 16938.535: 81.1907% ( 75) 00:07:39.153 16938.535 - 17039.360: 82.0214% ( 84) 00:07:39.153 17039.360 - 17140.185: 82.8224% ( 81) 00:07:39.153 17140.185 - 17241.009: 83.5245% ( 71) 00:07:39.153 17241.009 - 17341.834: 84.1475% ( 63) 00:07:39.153 17341.834 - 17442.658: 84.8101% ( 67) 00:07:39.153 17442.658 - 17543.483: 85.5024% ( 70) 00:07:39.153 17543.483 - 17644.308: 86.2638% ( 77) 00:07:39.153 17644.308 - 17745.132: 87.1737% ( 92) 00:07:39.153 17745.132 - 17845.957: 88.0142% ( 85) 00:07:39.153 17845.957 - 17946.782: 88.8153% ( 81) 00:07:39.153 17946.782 - 18047.606: 89.7251% ( 92) 00:07:39.153 18047.606 - 18148.431: 90.6646% ( 95) 00:07:39.153 18148.431 - 18249.255: 91.6930% ( 104) 00:07:39.153 18249.255 - 18350.080: 92.4743% ( 79) 00:07:39.153 18350.080 - 18450.905: 93.2259% ( 76) 00:07:39.153 18450.905 - 18551.729: 94.0665% ( 85) 00:07:39.153 18551.729 - 18652.554: 94.8378% ( 78) 00:07:39.153 18652.554 - 18753.378: 95.3916% ( 56) 00:07:39.153 18753.378 - 18854.203: 95.8861% ( 50) 00:07:39.153 18854.203 - 18955.028: 96.4695% ( 59) 00:07:39.153 18955.028 - 19055.852: 97.0134% ( 55) 00:07:39.153 19055.852 - 19156.677: 97.3794% ( 37) 00:07:39.153 19156.677 - 19257.502: 97.6958% ( 32) 00:07:39.153 19257.502 - 19358.326: 98.0024% ( 31) 00:07:39.153 19358.326 - 
19459.151: 98.2496% ( 25) 00:07:39.153 19459.151 - 19559.975: 98.4177% ( 17) 00:07:39.153 19559.975 - 19660.800: 98.5463% ( 13) 00:07:39.153 19660.800 - 19761.625: 98.6847% ( 14) 00:07:39.153 19761.625 - 19862.449: 98.7045% ( 2) 00:07:39.153 19862.449 - 19963.274: 98.7342% ( 3) 00:07:39.153 25811.102 - 26012.751: 98.7836% ( 5) 00:07:39.153 26012.751 - 26214.400: 98.8528% ( 7) 00:07:39.153 26214.400 - 26416.049: 98.9419% ( 9) 00:07:39.153 26416.049 - 26617.698: 99.0111% ( 7) 00:07:39.153 26617.698 - 26819.348: 99.0902% ( 8) 00:07:39.153 26819.348 - 27020.997: 99.1594% ( 7) 00:07:39.153 27020.997 - 27222.646: 99.2385% ( 8) 00:07:39.153 27222.646 - 27424.295: 99.3275% ( 9) 00:07:39.153 27424.295 - 27625.945: 99.3671% ( 4) 00:07:39.153 34482.018 - 34683.668: 99.3869% ( 2) 00:07:39.153 34683.668 - 34885.317: 99.4660% ( 8) 00:07:39.153 34885.317 - 35086.966: 99.5451% ( 8) 00:07:39.153 35086.966 - 35288.615: 99.6341% ( 9) 00:07:39.153 35288.615 - 35490.265: 99.7132% ( 8) 00:07:39.153 35490.265 - 35691.914: 99.7923% ( 8) 00:07:39.153 35691.914 - 35893.563: 99.8714% ( 8) 00:07:39.153 35893.563 - 36095.212: 99.9506% ( 8) 00:07:39.153 36095.212 - 36296.862: 100.0000% ( 5) 00:07:39.153 00:07:39.153 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.153 ============================================================================== 00:07:39.153 Range in us Cumulative IO count 00:07:39.153 5444.529 - 5469.735: 0.0297% ( 3) 00:07:39.153 5469.735 - 5494.942: 0.0494% ( 2) 00:07:39.153 5494.942 - 5520.148: 0.0692% ( 2) 00:07:39.153 5520.148 - 5545.354: 0.1286% ( 6) 00:07:39.153 5545.354 - 5570.560: 0.2571% ( 13) 00:07:39.153 5570.560 - 5595.766: 0.4153% ( 16) 00:07:39.153 5595.766 - 5620.972: 0.5934% ( 18) 00:07:39.153 5620.972 - 5646.178: 0.7516% ( 16) 00:07:39.153 5646.178 - 5671.385: 0.9197% ( 17) 00:07:39.153 5671.385 - 5696.591: 1.0779% ( 16) 00:07:39.153 5696.591 - 5721.797: 1.3054% ( 23) 00:07:39.153 5721.797 - 5747.003: 1.4933% ( 19) 00:07:39.153 5747.003 - 5772.209: 1.7108% ( 22) 00:07:39.153 5772.209 - 5797.415: 2.0767% ( 37) 00:07:39.153 5797.415 - 5822.622: 2.4624% ( 39) 00:07:39.153 5822.622 - 5847.828: 2.8975% ( 44) 00:07:39.153 5847.828 - 5873.034: 3.2832% ( 39) 00:07:39.153 5873.034 - 5898.240: 3.5997% ( 32) 00:07:39.153 5898.240 - 5923.446: 4.0150% ( 42) 00:07:39.153 5923.446 - 5948.652: 4.5392% ( 53) 00:07:39.153 5948.652 - 5973.858: 5.0534% ( 52) 00:07:39.153 5973.858 - 5999.065: 5.4688% ( 42) 00:07:39.153 5999.065 - 6024.271: 5.9335% ( 47) 00:07:39.153 6024.271 - 6049.477: 6.3786% ( 45) 00:07:39.153 6049.477 - 6074.683: 6.8631% ( 49) 00:07:39.153 6074.683 - 6099.889: 7.4070% ( 55) 00:07:39.153 6099.889 - 6125.095: 7.9213% ( 52) 00:07:39.153 6125.095 - 6150.302: 8.4355% ( 52) 00:07:39.153 6150.302 - 6175.508: 8.9399% ( 51) 00:07:39.153 6175.508 - 6200.714: 9.4146% ( 48) 00:07:39.153 6200.714 - 6225.920: 9.8892% ( 48) 00:07:39.153 6225.920 - 6251.126: 10.3441% ( 46) 00:07:39.153 6251.126 - 6276.332: 10.8584% ( 52) 00:07:39.153 6276.332 - 6301.538: 11.3133% ( 46) 00:07:39.153 6301.538 - 6326.745: 11.7484% ( 44) 00:07:39.153 6326.745 - 6351.951: 12.1934% ( 45) 00:07:39.153 6351.951 - 6377.157: 12.6088% ( 42) 00:07:39.153 6377.157 - 6402.363: 13.0044% ( 40) 00:07:39.153 6402.363 - 6427.569: 13.3900% ( 39) 00:07:39.153 6427.569 - 6452.775: 13.7460% ( 36) 00:07:39.153 6452.775 - 6503.188: 14.4680% ( 73) 00:07:39.153 6503.188 - 6553.600: 15.1701% ( 71) 00:07:39.153 6553.600 - 6604.012: 15.7041% ( 54) 00:07:39.153 6604.012 - 6654.425: 16.2184% ( 52) 00:07:39.153 6654.425 - 
6704.837: 16.6436% ( 43) 00:07:39.153 6704.837 - 6755.249: 16.9798% ( 34) 00:07:39.153 6755.249 - 6805.662: 17.3161% ( 34) 00:07:39.153 6805.662 - 6856.074: 17.6226% ( 31) 00:07:39.153 6856.074 - 6906.486: 17.8600% ( 24) 00:07:39.153 6906.486 - 6956.898: 18.0973% ( 24) 00:07:39.153 6956.898 - 7007.311: 18.3643% ( 27) 00:07:39.153 7007.311 - 7057.723: 18.5720% ( 21) 00:07:39.153 7057.723 - 7108.135: 18.7797% ( 21) 00:07:39.153 7108.135 - 7158.548: 18.9577% ( 18) 00:07:39.153 7158.548 - 7208.960: 19.1357% ( 18) 00:07:39.153 7208.960 - 7259.372: 19.3137% ( 18) 00:07:39.153 7259.372 - 7309.785: 19.4620% ( 15) 00:07:39.153 7309.785 - 7360.197: 19.5906% ( 13) 00:07:39.153 7360.197 - 7410.609: 19.6796% ( 9) 00:07:39.153 7410.609 - 7461.022: 19.7587% ( 8) 00:07:39.153 7461.022 - 7511.434: 19.8972% ( 14) 00:07:39.153 7511.434 - 7561.846: 20.0653% ( 17) 00:07:39.153 7561.846 - 7612.258: 20.2334% ( 17) 00:07:39.153 7612.258 - 7662.671: 20.3718% ( 14) 00:07:39.153 7662.671 - 7713.083: 20.4509% ( 8) 00:07:39.153 7713.083 - 7763.495: 20.5498% ( 10) 00:07:39.153 7763.495 - 7813.908: 20.6685% ( 12) 00:07:39.153 7813.908 - 7864.320: 20.7674% ( 10) 00:07:39.153 7864.320 - 7914.732: 20.9059% ( 14) 00:07:39.153 7914.732 - 7965.145: 21.0047% ( 10) 00:07:39.153 7965.145 - 8015.557: 21.1432% ( 14) 00:07:39.153 8015.557 - 8065.969: 21.3410% ( 20) 00:07:39.153 8065.969 - 8116.382: 21.4695% ( 13) 00:07:39.153 8116.382 - 8166.794: 21.6278% ( 16) 00:07:39.153 8166.794 - 8217.206: 21.8256% ( 20) 00:07:39.153 8217.206 - 8267.618: 22.0332% ( 21) 00:07:39.153 8267.618 - 8318.031: 22.2409% ( 21) 00:07:39.153 8318.031 - 8368.443: 22.4585% ( 22) 00:07:39.153 8368.443 - 8418.855: 22.6859% ( 23) 00:07:39.154 8418.855 - 8469.268: 22.9134% ( 23) 00:07:39.154 8469.268 - 8519.680: 23.1507% ( 24) 00:07:39.154 8519.680 - 8570.092: 23.3782% ( 23) 00:07:39.154 8570.092 - 8620.505: 23.6847% ( 31) 00:07:39.154 8620.505 - 8670.917: 23.9320% ( 25) 00:07:39.154 8670.917 - 8721.329: 24.2979% ( 37) 00:07:39.154 8721.329 - 8771.742: 24.6143% ( 32) 00:07:39.154 8771.742 - 8822.154: 24.9901% ( 38) 00:07:39.154 8822.154 - 8872.566: 25.3857% ( 40) 00:07:39.154 8872.566 - 8922.978: 25.7714% ( 39) 00:07:39.154 8922.978 - 8973.391: 26.1472% ( 38) 00:07:39.154 8973.391 - 9023.803: 26.5427% ( 40) 00:07:39.154 9023.803 - 9074.215: 26.9482% ( 41) 00:07:39.154 9074.215 - 9124.628: 27.3438% ( 40) 00:07:39.154 9124.628 - 9175.040: 27.7195% ( 38) 00:07:39.154 9175.040 - 9225.452: 28.0657% ( 35) 00:07:39.154 9225.452 - 9275.865: 28.4217% ( 36) 00:07:39.154 9275.865 - 9326.277: 28.7381% ( 32) 00:07:39.154 9326.277 - 9376.689: 29.0348% ( 30) 00:07:39.154 9376.689 - 9427.102: 29.3414% ( 31) 00:07:39.154 9427.102 - 9477.514: 29.6677% ( 33) 00:07:39.154 9477.514 - 9527.926: 30.0040% ( 34) 00:07:39.154 9527.926 - 9578.338: 30.3600% ( 36) 00:07:39.154 9578.338 - 9628.751: 30.6270% ( 27) 00:07:39.154 9628.751 - 9679.163: 30.8544% ( 23) 00:07:39.154 9679.163 - 9729.575: 31.0423% ( 19) 00:07:39.154 9729.575 - 9779.988: 31.2401% ( 20) 00:07:39.154 9779.988 - 9830.400: 31.4676% ( 23) 00:07:39.154 9830.400 - 9880.812: 31.6851% ( 22) 00:07:39.154 9880.812 - 9931.225: 31.8730% ( 19) 00:07:39.154 9931.225 - 9981.637: 32.0312% ( 16) 00:07:39.154 9981.637 - 10032.049: 32.1598% ( 13) 00:07:39.154 10032.049 - 10082.462: 32.2785% ( 12) 00:07:39.154 10082.462 - 10132.874: 32.4466% ( 17) 00:07:39.154 10132.874 - 10183.286: 32.6048% ( 16) 00:07:39.154 10183.286 - 10233.698: 32.7927% ( 19) 00:07:39.154 10233.698 - 10284.111: 32.9707% ( 18) 00:07:39.154 10284.111 - 10334.523: 
33.1586% ( 19) 00:07:39.154 10334.523 - 10384.935: 33.3663% ( 21) 00:07:39.154 10384.935 - 10435.348: 33.5740% ( 21) 00:07:39.154 10435.348 - 10485.760: 33.7619% ( 19) 00:07:39.154 10485.760 - 10536.172: 34.0091% ( 25) 00:07:39.154 10536.172 - 10586.585: 34.1871% ( 18) 00:07:39.154 10586.585 - 10636.997: 34.3849% ( 20) 00:07:39.154 10636.997 - 10687.409: 34.6025% ( 22) 00:07:39.154 10687.409 - 10737.822: 34.7805% ( 18) 00:07:39.154 10737.822 - 10788.234: 34.9486% ( 17) 00:07:39.154 10788.234 - 10838.646: 35.1167% ( 17) 00:07:39.154 10838.646 - 10889.058: 35.2848% ( 17) 00:07:39.154 10889.058 - 10939.471: 35.4628% ( 18) 00:07:39.154 10939.471 - 10989.883: 35.6309% ( 17) 00:07:39.154 10989.883 - 11040.295: 35.8089% ( 18) 00:07:39.154 11040.295 - 11090.708: 35.9968% ( 19) 00:07:39.154 11090.708 - 11141.120: 36.1748% ( 18) 00:07:39.154 11141.120 - 11191.532: 36.3232% ( 15) 00:07:39.154 11191.532 - 11241.945: 36.4913% ( 17) 00:07:39.154 11241.945 - 11292.357: 36.6792% ( 19) 00:07:39.154 11292.357 - 11342.769: 36.9066% ( 23) 00:07:39.154 11342.769 - 11393.182: 37.1539% ( 25) 00:07:39.154 11393.182 - 11443.594: 37.3418% ( 19) 00:07:39.154 11443.594 - 11494.006: 37.6483% ( 31) 00:07:39.154 11494.006 - 11544.418: 37.9450% ( 30) 00:07:39.154 11544.418 - 11594.831: 38.2516% ( 31) 00:07:39.154 11594.831 - 11645.243: 38.5285% ( 28) 00:07:39.154 11645.243 - 11695.655: 38.8054% ( 28) 00:07:39.154 11695.655 - 11746.068: 39.1416% ( 34) 00:07:39.154 11746.068 - 11796.480: 39.5471% ( 41) 00:07:39.154 11796.480 - 11846.892: 39.8734% ( 33) 00:07:39.154 11846.892 - 11897.305: 40.2097% ( 34) 00:07:39.154 11897.305 - 11947.717: 40.5558% ( 35) 00:07:39.154 11947.717 - 11998.129: 40.9217% ( 37) 00:07:39.154 11998.129 - 12048.542: 41.3964% ( 48) 00:07:39.154 12048.542 - 12098.954: 41.8117% ( 42) 00:07:39.154 12098.954 - 12149.366: 42.2567% ( 45) 00:07:39.154 12149.366 - 12199.778: 42.7710% ( 52) 00:07:39.154 12199.778 - 12250.191: 43.1962% ( 43) 00:07:39.154 12250.191 - 12300.603: 43.7401% ( 55) 00:07:39.154 12300.603 - 12351.015: 44.2939% ( 56) 00:07:39.154 12351.015 - 12401.428: 44.7884% ( 50) 00:07:39.154 12401.428 - 12451.840: 45.3026% ( 52) 00:07:39.154 12451.840 - 12502.252: 45.7278% ( 43) 00:07:39.154 12502.252 - 12552.665: 46.1926% ( 47) 00:07:39.154 12552.665 - 12603.077: 46.6772% ( 49) 00:07:39.154 12603.077 - 12653.489: 47.2805% ( 61) 00:07:39.154 12653.489 - 12703.902: 47.8837% ( 61) 00:07:39.154 12703.902 - 12754.314: 48.5166% ( 64) 00:07:39.154 12754.314 - 12804.726: 49.1495% ( 64) 00:07:39.154 12804.726 - 12855.138: 49.6835% ( 54) 00:07:39.154 12855.138 - 12905.551: 50.2176% ( 54) 00:07:39.154 12905.551 - 13006.375: 51.3350% ( 113) 00:07:39.154 13006.375 - 13107.200: 52.6602% ( 134) 00:07:39.154 13107.200 - 13208.025: 53.8865% ( 124) 00:07:39.154 13208.025 - 13308.849: 55.1325% ( 126) 00:07:39.154 13308.849 - 13409.674: 56.3786% ( 126) 00:07:39.154 13409.674 - 13510.498: 57.5158% ( 115) 00:07:39.154 13510.498 - 13611.323: 58.4751% ( 97) 00:07:39.154 13611.323 - 13712.148: 59.1871% ( 72) 00:07:39.154 13712.148 - 13812.972: 59.8398% ( 66) 00:07:39.154 13812.972 - 13913.797: 60.5024% ( 67) 00:07:39.154 13913.797 - 14014.622: 61.0759% ( 58) 00:07:39.154 14014.622 - 14115.446: 61.5506% ( 48) 00:07:39.154 14115.446 - 14216.271: 62.0154% ( 47) 00:07:39.154 14216.271 - 14317.095: 62.5989% ( 59) 00:07:39.154 14317.095 - 14417.920: 63.1329% ( 54) 00:07:39.154 14417.920 - 14518.745: 63.8054% ( 68) 00:07:39.154 14518.745 - 14619.569: 64.4086% ( 61) 00:07:39.154 14619.569 - 14720.394: 65.0415% ( 64) 00:07:39.154 
14720.394 - 14821.218: 65.5953% ( 56) 00:07:39.154 14821.218 - 14922.043: 66.1788% ( 59) 00:07:39.154 14922.043 - 15022.868: 66.7919% ( 62) 00:07:39.154 15022.868 - 15123.692: 67.6127% ( 83) 00:07:39.154 15123.692 - 15224.517: 68.3544% ( 75) 00:07:39.154 15224.517 - 15325.342: 69.0763% ( 73) 00:07:39.154 15325.342 - 15426.166: 69.9664% ( 90) 00:07:39.154 15426.166 - 15526.991: 70.8267% ( 87) 00:07:39.154 15526.991 - 15627.815: 71.6574% ( 84) 00:07:39.154 15627.815 - 15728.640: 72.4782% ( 83) 00:07:39.154 15728.640 - 15829.465: 73.2397% ( 77) 00:07:39.154 15829.465 - 15930.289: 74.0605% ( 83) 00:07:39.154 15930.289 - 16031.114: 74.9209% ( 87) 00:07:39.154 16031.114 - 16131.938: 75.7911% ( 88) 00:07:39.154 16131.938 - 16232.763: 76.5922% ( 81) 00:07:39.154 16232.763 - 16333.588: 77.6305% ( 105) 00:07:39.154 16333.588 - 16434.412: 78.5403% ( 92) 00:07:39.154 16434.412 - 16535.237: 79.4798% ( 95) 00:07:39.154 16535.237 - 16636.062: 80.3105% ( 84) 00:07:39.154 16636.062 - 16736.886: 81.1412% ( 84) 00:07:39.154 16736.886 - 16837.711: 82.0115% ( 88) 00:07:39.154 16837.711 - 16938.535: 82.8026% ( 80) 00:07:39.154 16938.535 - 17039.360: 83.5740% ( 78) 00:07:39.154 17039.360 - 17140.185: 84.2860% ( 72) 00:07:39.154 17140.185 - 17241.009: 84.8596% ( 58) 00:07:39.154 17241.009 - 17341.834: 85.2947% ( 44) 00:07:39.154 17341.834 - 17442.658: 85.6903% ( 40) 00:07:39.154 17442.658 - 17543.483: 86.1946% ( 51) 00:07:39.154 17543.483 - 17644.308: 86.7781% ( 59) 00:07:39.154 17644.308 - 17745.132: 87.3220% ( 55) 00:07:39.154 17745.132 - 17845.957: 87.8758% ( 56) 00:07:39.154 17845.957 - 17946.782: 88.5878% ( 72) 00:07:39.154 17946.782 - 18047.606: 89.3691% ( 79) 00:07:39.154 18047.606 - 18148.431: 90.1305% ( 77) 00:07:39.154 18148.431 - 18249.255: 90.8920% ( 77) 00:07:39.154 18249.255 - 18350.080: 91.6337% ( 75) 00:07:39.154 18350.080 - 18450.905: 92.3853% ( 76) 00:07:39.154 18450.905 - 18551.729: 93.1369% ( 76) 00:07:39.154 18551.729 - 18652.554: 93.7599% ( 63) 00:07:39.154 18652.554 - 18753.378: 94.3532% ( 60) 00:07:39.154 18753.378 - 18854.203: 94.8774% ( 53) 00:07:39.154 18854.203 - 18955.028: 95.5301% ( 66) 00:07:39.154 18955.028 - 19055.852: 96.0938% ( 57) 00:07:39.154 19055.852 - 19156.677: 96.6970% ( 61) 00:07:39.154 19156.677 - 19257.502: 97.2013% ( 51) 00:07:39.154 19257.502 - 19358.326: 97.5969% ( 40) 00:07:39.154 19358.326 - 19459.151: 97.9134% ( 32) 00:07:39.154 19459.151 - 19559.975: 98.1606% ( 25) 00:07:39.154 19559.975 - 19660.800: 98.3485% ( 19) 00:07:39.154 19660.800 - 19761.625: 98.5166% ( 17) 00:07:39.154 19761.625 - 19862.449: 98.6452% ( 13) 00:07:39.154 19862.449 - 19963.274: 98.7342% ( 9) 00:07:39.154 26416.049 - 26617.698: 98.7441% ( 1) 00:07:39.154 26617.698 - 26819.348: 98.7836% ( 4) 00:07:39.154 26819.348 - 27020.997: 98.8331% ( 5) 00:07:39.154 27020.997 - 27222.646: 98.8825% ( 5) 00:07:39.154 27222.646 - 27424.295: 98.9320% ( 5) 00:07:39.154 27424.295 - 27625.945: 99.0111% ( 8) 00:07:39.154 27625.945 - 27827.594: 99.1001% ( 9) 00:07:39.154 27827.594 - 28029.243: 99.1792% ( 8) 00:07:39.154 28029.243 - 28230.892: 99.2682% ( 9) 00:07:39.154 28230.892 - 28432.542: 99.3473% ( 8) 00:07:39.154 28432.542 - 28634.191: 99.3671% ( 2) 00:07:39.154 33070.474 - 33272.123: 99.3968% ( 3) 00:07:39.154 33272.123 - 33473.772: 99.4660% ( 7) 00:07:39.154 33473.772 - 33675.422: 99.5352% ( 7) 00:07:39.154 33675.422 - 33877.071: 99.6143% ( 8) 00:07:39.154 33877.071 - 34078.720: 99.6934% ( 8) 00:07:39.154 34078.720 - 34280.369: 99.7725% ( 8) 00:07:39.154 34280.369 - 34482.018: 99.8517% ( 8) 00:07:39.154 
34482.018 - 34683.668: 99.9308% ( 8) 00:07:39.154 34683.668 - 34885.317: 100.0000% ( 7) 00:07:39.154 00:07:39.154 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.154 ============================================================================== 00:07:39.154 Range in us Cumulative IO count 00:07:39.154 5469.735 - 5494.942: 0.0098% ( 1) 00:07:39.154 5494.942 - 5520.148: 0.0590% ( 5) 00:07:39.154 5520.148 - 5545.354: 0.1081% ( 5) 00:07:39.155 5545.354 - 5570.560: 0.2260% ( 12) 00:07:39.155 5570.560 - 5595.766: 0.3636% ( 14) 00:07:39.155 5595.766 - 5620.972: 0.5208% ( 16) 00:07:39.155 5620.972 - 5646.178: 0.6584% ( 14) 00:07:39.155 5646.178 - 5671.385: 0.8156% ( 16) 00:07:39.155 5671.385 - 5696.591: 1.0122% ( 20) 00:07:39.155 5696.591 - 5721.797: 1.2775% ( 27) 00:07:39.155 5721.797 - 5747.003: 1.5330% ( 26) 00:07:39.155 5747.003 - 5772.209: 1.8475% ( 32) 00:07:39.155 5772.209 - 5797.415: 2.1718% ( 33) 00:07:39.155 5797.415 - 5822.622: 2.5747% ( 41) 00:07:39.155 5822.622 - 5847.828: 2.9383% ( 37) 00:07:39.155 5847.828 - 5873.034: 3.3117% ( 38) 00:07:39.155 5873.034 - 5898.240: 3.6753% ( 37) 00:07:39.155 5898.240 - 5923.446: 4.0881% ( 42) 00:07:39.155 5923.446 - 5948.652: 4.4811% ( 40) 00:07:39.155 5948.652 - 5973.858: 4.8840% ( 41) 00:07:39.155 5973.858 - 5999.065: 5.3361% ( 46) 00:07:39.155 5999.065 - 6024.271: 5.7980% ( 47) 00:07:39.155 6024.271 - 6049.477: 6.2205% ( 43) 00:07:39.155 6049.477 - 6074.683: 6.6922% ( 48) 00:07:39.155 6074.683 - 6099.889: 7.1934% ( 51) 00:07:39.155 6099.889 - 6125.095: 7.6651% ( 48) 00:07:39.155 6125.095 - 6150.302: 8.1859% ( 53) 00:07:39.155 6150.302 - 6175.508: 8.6773% ( 50) 00:07:39.155 6175.508 - 6200.714: 9.1686% ( 50) 00:07:39.155 6200.714 - 6225.920: 9.6403% ( 48) 00:07:39.155 6225.920 - 6251.126: 10.0629% ( 43) 00:07:39.155 6251.126 - 6276.332: 10.5641% ( 51) 00:07:39.155 6276.332 - 6301.538: 10.9866% ( 43) 00:07:39.155 6301.538 - 6326.745: 11.3895% ( 41) 00:07:39.155 6326.745 - 6351.951: 11.7728% ( 39) 00:07:39.155 6351.951 - 6377.157: 12.1561% ( 39) 00:07:39.155 6377.157 - 6402.363: 12.5393% ( 39) 00:07:39.155 6402.363 - 6427.569: 12.9127% ( 38) 00:07:39.155 6427.569 - 6452.775: 13.2763% ( 37) 00:07:39.155 6452.775 - 6503.188: 13.9446% ( 68) 00:07:39.155 6503.188 - 6553.600: 14.5833% ( 65) 00:07:39.155 6553.600 - 6604.012: 15.1042% ( 53) 00:07:39.155 6604.012 - 6654.425: 15.5759% ( 48) 00:07:39.155 6654.425 - 6704.837: 15.9002% ( 33) 00:07:39.155 6704.837 - 6755.249: 16.3325% ( 44) 00:07:39.155 6755.249 - 6805.662: 16.7256% ( 40) 00:07:39.155 6805.662 - 6856.074: 17.0303% ( 31) 00:07:39.155 6856.074 - 6906.486: 17.3447% ( 32) 00:07:39.155 6906.486 - 6956.898: 17.5904% ( 25) 00:07:39.155 6956.898 - 7007.311: 17.8656% ( 28) 00:07:39.155 7007.311 - 7057.723: 18.1309% ( 27) 00:07:39.155 7057.723 - 7108.135: 18.4061% ( 28) 00:07:39.155 7108.135 - 7158.548: 18.6419% ( 24) 00:07:39.155 7158.548 - 7208.960: 18.8483% ( 21) 00:07:39.155 7208.960 - 7259.372: 19.0153% ( 17) 00:07:39.155 7259.372 - 7309.785: 19.1431% ( 13) 00:07:39.155 7309.785 - 7360.197: 19.2708% ( 13) 00:07:39.155 7360.197 - 7410.609: 19.4281% ( 16) 00:07:39.155 7410.609 - 7461.022: 19.6148% ( 19) 00:07:39.155 7461.022 - 7511.434: 19.7622% ( 15) 00:07:39.155 7511.434 - 7561.846: 19.9292% ( 17) 00:07:39.155 7561.846 - 7612.258: 20.0472% ( 12) 00:07:39.155 7612.258 - 7662.671: 20.2142% ( 17) 00:07:39.155 7662.671 - 7713.083: 20.4108% ( 20) 00:07:39.155 7713.083 - 7763.495: 20.5680% ( 16) 00:07:39.155 7763.495 - 7813.908: 20.7351% ( 17) 00:07:39.155 7813.908 - 7864.320: 
20.9021% ( 17) 00:07:39.155 7864.320 - 7914.732: 21.0397% ( 14) 00:07:39.155 7914.732 - 7965.145: 21.1773% ( 14) 00:07:39.155 7965.145 - 8015.557: 21.2952% ( 12) 00:07:39.155 8015.557 - 8065.969: 21.4033% ( 11) 00:07:39.155 8065.969 - 8116.382: 21.5311% ( 13) 00:07:39.155 8116.382 - 8166.794: 21.6588% ( 13) 00:07:39.155 8166.794 - 8217.206: 21.8062% ( 15) 00:07:39.155 8217.206 - 8267.618: 21.9634% ( 16) 00:07:39.155 8267.618 - 8318.031: 22.1010% ( 14) 00:07:39.155 8318.031 - 8368.443: 22.2484% ( 15) 00:07:39.155 8368.443 - 8418.855: 22.4253% ( 18) 00:07:39.155 8418.855 - 8469.268: 22.6219% ( 20) 00:07:39.155 8469.268 - 8519.680: 22.8282% ( 21) 00:07:39.155 8519.680 - 8570.092: 23.0346% ( 21) 00:07:39.155 8570.092 - 8620.505: 23.2901% ( 26) 00:07:39.155 8620.505 - 8670.917: 23.5456% ( 26) 00:07:39.155 8670.917 - 8721.329: 23.8109% ( 27) 00:07:39.155 8721.329 - 8771.742: 24.1057% ( 30) 00:07:39.155 8771.742 - 8822.154: 24.4104% ( 31) 00:07:39.155 8822.154 - 8872.566: 24.6659% ( 26) 00:07:39.155 8872.566 - 8922.978: 24.9902% ( 33) 00:07:39.155 8922.978 - 8973.391: 25.3341% ( 35) 00:07:39.155 8973.391 - 9023.803: 25.7174% ( 39) 00:07:39.155 9023.803 - 9074.215: 26.1301% ( 42) 00:07:39.155 9074.215 - 9124.628: 26.5330% ( 41) 00:07:39.155 9124.628 - 9175.040: 26.9556% ( 43) 00:07:39.155 9175.040 - 9225.452: 27.2799% ( 33) 00:07:39.155 9225.452 - 9275.865: 27.5747% ( 30) 00:07:39.155 9275.865 - 9326.277: 27.9678% ( 40) 00:07:39.155 9326.277 - 9376.689: 28.3510% ( 39) 00:07:39.155 9376.689 - 9427.102: 28.7244% ( 38) 00:07:39.155 9427.102 - 9477.514: 29.1470% ( 43) 00:07:39.155 9477.514 - 9527.926: 29.5401% ( 40) 00:07:39.155 9527.926 - 9578.338: 29.9037% ( 37) 00:07:39.155 9578.338 - 9628.751: 30.1985% ( 30) 00:07:39.155 9628.751 - 9679.163: 30.5031% ( 31) 00:07:39.155 9679.163 - 9729.575: 30.8078% ( 31) 00:07:39.155 9729.575 - 9779.988: 31.1222% ( 32) 00:07:39.155 9779.988 - 9830.400: 31.4564% ( 34) 00:07:39.155 9830.400 - 9880.812: 31.7119% ( 26) 00:07:39.155 9880.812 - 9931.225: 31.9969% ( 29) 00:07:39.155 9931.225 - 9981.637: 32.2818% ( 29) 00:07:39.155 9981.637 - 10032.049: 32.4980% ( 22) 00:07:39.155 10032.049 - 10082.462: 32.7437% ( 25) 00:07:39.155 10082.462 - 10132.874: 32.9992% ( 26) 00:07:39.155 10132.874 - 10183.286: 33.3235% ( 33) 00:07:39.155 10183.286 - 10233.698: 33.6085% ( 29) 00:07:39.155 10233.698 - 10284.111: 33.8345% ( 23) 00:07:39.155 10284.111 - 10334.523: 34.0311% ( 20) 00:07:39.155 10334.523 - 10384.935: 34.2178% ( 19) 00:07:39.155 10384.935 - 10435.348: 34.4045% ( 19) 00:07:39.155 10435.348 - 10485.760: 34.5814% ( 18) 00:07:39.155 10485.760 - 10536.172: 34.7288% ( 15) 00:07:39.155 10536.172 - 10586.585: 34.8664% ( 14) 00:07:39.155 10586.585 - 10636.997: 35.0138% ( 15) 00:07:39.155 10636.997 - 10687.409: 35.1120% ( 10) 00:07:39.155 10687.409 - 10737.822: 35.2398% ( 13) 00:07:39.155 10737.822 - 10788.234: 35.3774% ( 14) 00:07:39.155 10788.234 - 10838.646: 35.5542% ( 18) 00:07:39.155 10838.646 - 10889.058: 35.6820% ( 13) 00:07:39.155 10889.058 - 10939.471: 35.8491% ( 17) 00:07:39.155 10939.471 - 10989.883: 35.9572% ( 11) 00:07:39.155 10989.883 - 11040.295: 36.0849% ( 13) 00:07:39.155 11040.295 - 11090.708: 36.2520% ( 17) 00:07:39.155 11090.708 - 11141.120: 36.3994% ( 15) 00:07:39.155 11141.120 - 11191.532: 36.5566% ( 16) 00:07:39.155 11191.532 - 11241.945: 36.7138% ( 16) 00:07:39.155 11241.945 - 11292.357: 36.9300% ( 22) 00:07:39.155 11292.357 - 11342.769: 37.1462% ( 22) 00:07:39.155 11342.769 - 11393.182: 37.3722% ( 23) 00:07:39.155 11393.182 - 11443.594: 37.6572% ( 29) 
00:07:39.155 11443.594 - 11494.006: 37.9520% ( 30) 00:07:39.155 11494.006 - 11544.418: 38.2370% ( 29) 00:07:39.155 11544.418 - 11594.831: 38.4925% ( 26) 00:07:39.155 11594.831 - 11645.243: 38.7873% ( 30) 00:07:39.155 11645.243 - 11695.655: 39.0527% ( 27) 00:07:39.155 11695.655 - 11746.068: 39.3377% ( 29) 00:07:39.155 11746.068 - 11796.480: 39.7111% ( 38) 00:07:39.155 11796.480 - 11846.892: 40.0845% ( 38) 00:07:39.155 11846.892 - 11897.305: 40.5071% ( 43) 00:07:39.155 11897.305 - 11947.717: 40.8412% ( 34) 00:07:39.155 11947.717 - 11998.129: 41.2244% ( 39) 00:07:39.155 11998.129 - 12048.542: 41.5979% ( 38) 00:07:39.155 12048.542 - 12098.954: 41.9418% ( 35) 00:07:39.155 12098.954 - 12149.366: 42.2858% ( 35) 00:07:39.155 12149.366 - 12199.778: 42.6690% ( 39) 00:07:39.155 12199.778 - 12250.191: 42.9737% ( 31) 00:07:39.155 12250.191 - 12300.603: 43.3766% ( 41) 00:07:39.155 12300.603 - 12351.015: 43.7303% ( 36) 00:07:39.155 12351.015 - 12401.428: 44.0743% ( 35) 00:07:39.155 12401.428 - 12451.840: 44.5558% ( 49) 00:07:39.155 12451.840 - 12502.252: 44.9784% ( 43) 00:07:39.155 12502.252 - 12552.665: 45.3813% ( 41) 00:07:39.155 12552.665 - 12603.077: 45.8137% ( 44) 00:07:39.155 12603.077 - 12653.489: 46.3836% ( 58) 00:07:39.155 12653.489 - 12703.902: 46.9438% ( 57) 00:07:39.155 12703.902 - 12754.314: 47.4843% ( 55) 00:07:39.155 12754.314 - 12804.726: 48.1034% ( 63) 00:07:39.155 12804.726 - 12855.138: 48.6930% ( 60) 00:07:39.155 12855.138 - 12905.551: 49.3612% ( 68) 00:07:39.155 12905.551 - 13006.375: 50.6289% ( 129) 00:07:39.155 13006.375 - 13107.200: 51.8966% ( 129) 00:07:39.155 13107.200 - 13208.025: 53.3903% ( 152) 00:07:39.155 13208.025 - 13308.849: 54.8742% ( 151) 00:07:39.155 13308.849 - 13409.674: 56.0535% ( 120) 00:07:39.155 13409.674 - 13510.498: 57.1737% ( 114) 00:07:39.155 13510.498 - 13611.323: 58.2744% ( 112) 00:07:39.155 13611.323 - 13712.148: 59.1785% ( 92) 00:07:39.155 13712.148 - 13812.972: 60.0138% ( 85) 00:07:39.155 13812.972 - 13913.797: 60.8097% ( 81) 00:07:39.155 13913.797 - 14014.622: 61.4976% ( 70) 00:07:39.155 14014.622 - 14115.446: 62.2248% ( 74) 00:07:39.155 14115.446 - 14216.271: 62.7358% ( 52) 00:07:39.155 14216.271 - 14317.095: 63.1682% ( 44) 00:07:39.155 14317.095 - 14417.920: 63.8365% ( 68) 00:07:39.155 14417.920 - 14518.745: 64.2787% ( 45) 00:07:39.155 14518.745 - 14619.569: 64.7013% ( 43) 00:07:39.155 14619.569 - 14720.394: 65.0649% ( 37) 00:07:39.156 14720.394 - 14821.218: 65.4579% ( 40) 00:07:39.156 14821.218 - 14922.043: 65.8314% ( 38) 00:07:39.156 14922.043 - 15022.868: 66.4112% ( 59) 00:07:39.156 15022.868 - 15123.692: 67.1285% ( 73) 00:07:39.156 15123.692 - 15224.517: 67.8066% ( 69) 00:07:39.156 15224.517 - 15325.342: 68.5240% ( 73) 00:07:39.156 15325.342 - 15426.166: 69.4477% ( 94) 00:07:39.156 15426.166 - 15526.991: 70.3322% ( 90) 00:07:39.156 15526.991 - 15627.815: 71.2559% ( 94) 00:07:39.156 15627.815 - 15728.640: 72.2091% ( 97) 00:07:39.156 15728.640 - 15829.465: 73.1427% ( 95) 00:07:39.156 15829.465 - 15930.289: 74.0664% ( 94) 00:07:39.156 15930.289 - 16031.114: 75.0000% ( 95) 00:07:39.156 16031.114 - 16131.938: 75.8844% ( 90) 00:07:39.156 16131.938 - 16232.763: 76.7983% ( 93) 00:07:39.156 16232.763 - 16333.588: 77.7811% ( 100) 00:07:39.156 16333.588 - 16434.412: 78.5869% ( 82) 00:07:39.156 16434.412 - 16535.237: 79.3534% ( 78) 00:07:39.156 16535.237 - 16636.062: 80.1887% ( 85) 00:07:39.156 16636.062 - 16736.886: 80.8962% ( 72) 00:07:39.156 16736.886 - 16837.711: 81.6627% ( 78) 00:07:39.156 16837.711 - 16938.535: 82.4784% ( 83) 00:07:39.156 16938.535 - 
17039.360: 83.2547% ( 79) 00:07:39.156 17039.360 - 17140.185: 83.9033% ( 66) 00:07:39.156 17140.185 - 17241.009: 84.6010% ( 71) 00:07:39.156 17241.009 - 17341.834: 85.4855% ( 90) 00:07:39.156 17341.834 - 17442.658: 86.2323% ( 76) 00:07:39.156 17442.658 - 17543.483: 86.9006% ( 68) 00:07:39.156 17543.483 - 17644.308: 87.5688% ( 68) 00:07:39.156 17644.308 - 17745.132: 88.3550% ( 80) 00:07:39.156 17745.132 - 17845.957: 89.0822% ( 74) 00:07:39.156 17845.957 - 17946.782: 89.8290% ( 76) 00:07:39.156 17946.782 - 18047.606: 90.6348% ( 82) 00:07:39.156 18047.606 - 18148.431: 91.5782% ( 96) 00:07:39.156 18148.431 - 18249.255: 92.4823% ( 92) 00:07:39.156 18249.255 - 18350.080: 93.3962% ( 93) 00:07:39.156 18350.080 - 18450.905: 94.1333% ( 75) 00:07:39.156 18450.905 - 18551.729: 94.7818% ( 66) 00:07:39.156 18551.729 - 18652.554: 95.4403% ( 67) 00:07:39.156 18652.554 - 18753.378: 95.9414% ( 51) 00:07:39.156 18753.378 - 18854.203: 96.3738% ( 44) 00:07:39.156 18854.203 - 18955.028: 96.7472% ( 38) 00:07:39.156 18955.028 - 19055.852: 97.0126% ( 27) 00:07:39.156 19055.852 - 19156.677: 97.1993% ( 19) 00:07:39.156 19156.677 - 19257.502: 97.4351% ( 24) 00:07:39.156 19257.502 - 19358.326: 97.7201% ( 29) 00:07:39.156 19358.326 - 19459.151: 98.0149% ( 30) 00:07:39.156 19459.151 - 19559.975: 98.3196% ( 31) 00:07:39.156 19559.975 - 19660.800: 98.5554% ( 24) 00:07:39.156 19660.800 - 19761.625: 98.7225% ( 17) 00:07:39.156 19761.625 - 19862.449: 98.8895% ( 17) 00:07:39.156 19862.449 - 19963.274: 99.0468% ( 16) 00:07:39.156 19963.274 - 20064.098: 99.1942% ( 15) 00:07:39.156 20064.098 - 20164.923: 99.3121% ( 12) 00:07:39.156 20164.923 - 20265.748: 99.3711% ( 6) 00:07:39.156 26416.049 - 26617.698: 99.3907% ( 2) 00:07:39.156 26617.698 - 26819.348: 99.4693% ( 8) 00:07:39.156 26819.348 - 27020.997: 99.5480% ( 8) 00:07:39.156 27020.997 - 27222.646: 99.6266% ( 8) 00:07:39.156 27222.646 - 27424.295: 99.7150% ( 9) 00:07:39.156 27424.295 - 27625.945: 99.7936% ( 8) 00:07:39.156 27625.945 - 27827.594: 99.8722% ( 8) 00:07:39.156 27827.594 - 28029.243: 99.9607% ( 9) 00:07:39.156 28029.243 - 28230.892: 100.0000% ( 4) 00:07:39.156 00:07:39.156 14:44:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:40.534 Initializing NVMe Controllers 00:07:40.534 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:40.534 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:40.534 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:40.534 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:40.534 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:40.534 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:40.534 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:40.534 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:40.534 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:40.534 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:40.534 Initialization complete. Launching workers. 
00:07:40.534 ========================================================
00:07:40.534 Latency(us)
00:07:40.534 Device Information : IOPS MiB/s Average min max
00:07:40.534 PCIE (0000:00:11.0) NSID 1 from core 0: 11151.60 130.68 11501.22 6907.86 25739.31
00:07:40.534 PCIE (0000:00:13.0) NSID 1 from core 0: 11151.60 130.68 11490.74 6647.45 26417.45
00:07:40.534 PCIE (0000:00:10.0) NSID 1 from core 0: 11151.60 130.68 11477.73 6742.15 26053.37
00:07:40.534 PCIE (0000:00:12.0) NSID 1 from core 0: 11151.60 130.68 11464.87 6764.86 25388.95
00:07:40.534 PCIE (0000:00:12.0) NSID 2 from core 0: 11151.60 130.68 11452.60 6960.03 24831.92
00:07:40.534 PCIE (0000:00:12.0) NSID 3 from core 0: 11215.33 131.43 11375.75 6781.05 17469.43
00:07:40.534 ========================================================
00:07:40.534 Total : 66973.34 784.84 11460.40 6647.45 26417.45
00:07:40.534
00:07:40.534 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:40.534 =================================================================================
00:07:40.534 1.00000% : 7410.609us
00:07:40.534 10.00000% : 9124.628us
00:07:40.534 25.00000% : 10233.698us
00:07:40.534 50.00000% : 11292.357us
00:07:40.534 75.00000% : 12401.428us
00:07:40.534 90.00000% : 14216.271us
00:07:40.534 95.00000% : 15426.166us
00:07:40.534 98.00000% : 16333.588us
00:07:40.534 99.00000% : 22383.065us
00:07:40.534 99.50000% : 25609.452us
00:07:40.534 99.90000% : 25710.277us
00:07:40.534 99.99000% : 25811.102us
00:07:40.534 99.99900% : 25811.102us
00:07:40.534 99.99990% : 25811.102us
00:07:40.534 99.99999% : 25811.102us
00:07:40.534
00:07:40.534 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:40.534 =================================================================================
00:07:40.534 1.00000% : 7309.785us
00:07:40.534 10.00000% : 9175.040us
00:07:40.534 25.00000% : 10284.111us
00:07:40.534 50.00000% : 11292.357us
00:07:40.534 75.00000% : 12401.428us
00:07:40.534 90.00000% : 14216.271us
00:07:40.534 95.00000% : 15526.991us
00:07:40.534 98.00000% : 16535.237us
00:07:40.534 99.00000% : 20366.572us
00:07:40.534 99.50000% : 24802.855us
00:07:40.534 99.90000% : 26214.400us
00:07:40.534 99.99000% : 26416.049us
00:07:40.534 99.99900% : 26617.698us
00:07:40.534 99.99990% : 26617.698us
00:07:40.534 99.99999% : 26617.698us
00:07:40.534
00:07:40.534 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:40.534 =================================================================================
00:07:40.534 1.00000% : 7259.372us
00:07:40.534 10.00000% : 9124.628us
00:07:40.534 25.00000% : 10233.698us
00:07:40.534 50.00000% : 11292.357us
00:07:40.534 75.00000% : 12351.015us
00:07:40.534 90.00000% : 14115.446us
00:07:40.534 95.00000% : 15526.991us
00:07:40.534 98.00000% : 16736.886us
00:07:40.534 99.00000% : 18854.203us
00:07:40.534 99.50000% : 23996.258us
00:07:40.534 99.90000% : 25710.277us
00:07:40.534 99.99000% : 26012.751us
00:07:40.534 99.99900% : 26214.400us
00:07:40.534 99.99990% : 26214.400us
00:07:40.534 99.99999% : 26214.400us
00:07:40.534
00:07:40.534 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:40.534 =================================================================================
00:07:40.534 1.00000% : 7208.960us
00:07:40.534 10.00000% : 9124.628us
00:07:40.534 25.00000% : 10284.111us
00:07:40.534 50.00000% : 11241.945us
00:07:40.534 75.00000% : 12351.015us
00:07:40.534 90.00000% : 14014.622us
00:07:40.534 95.00000% : 15728.640us
00:07:40.534 98.00000% : 16636.062
00:07:40.534 99.00000% : 17442.658us 00:07:40.534 99.50000% : 23492.135us 00:07:40.534 99.90000% : 25105.329us 00:07:40.534 99.99000% : 25407.803us 00:07:40.534 99.99900% : 25407.803us 00:07:40.534 99.99990% : 25407.803us 00:07:40.534 99.99999% : 25407.803us 00:07:40.534 00:07:40.534 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:40.534 ================================================================================= 00:07:40.534 1.00000% : 7410.609us 00:07:40.534 10.00000% : 9124.628us 00:07:40.534 25.00000% : 10132.874us 00:07:40.534 50.00000% : 11292.357us 00:07:40.534 75.00000% : 12300.603us 00:07:40.534 90.00000% : 14115.446us 00:07:40.534 95.00000% : 15728.640us 00:07:40.534 98.00000% : 16636.062us 00:07:40.534 99.00000% : 17241.009us 00:07:40.534 99.50000% : 22988.012us 00:07:40.534 99.90000% : 24500.382us 00:07:40.534 99.99000% : 24802.855us 00:07:40.534 99.99900% : 24903.680us 00:07:40.534 99.99990% : 24903.680us 00:07:40.534 99.99999% : 24903.680us 00:07:40.534 00:07:40.534 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:40.534 ================================================================================= 00:07:40.534 1.00000% : 7410.609us 00:07:40.534 10.00000% : 9124.628us 00:07:40.534 25.00000% : 10284.111us 00:07:40.534 50.00000% : 11342.769us 00:07:40.534 75.00000% : 12300.603us 00:07:40.534 90.00000% : 13913.797us 00:07:40.534 95.00000% : 15325.342us 00:07:40.534 98.00000% : 16333.588us 00:07:40.534 99.00000% : 16837.711us 00:07:40.534 99.50000% : 17039.360us 00:07:40.534 99.90000% : 17341.834us 00:07:40.534 99.99000% : 17442.658us 00:07:40.534 99.99900% : 17543.483us 00:07:40.534 99.99990% : 17543.483us 00:07:40.534 99.99999% : 17543.483us 00:07:40.534 00:07:40.534 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.534 ============================================================================== 00:07:40.534 Range in us Cumulative IO count 00:07:40.534 6906.486 - 6956.898: 0.0536% ( 6) 00:07:40.534 6956.898 - 7007.311: 0.0982% ( 5) 00:07:40.534 7007.311 - 7057.723: 0.2679% ( 19) 00:07:40.534 7057.723 - 7108.135: 0.4554% ( 21) 00:07:40.534 7108.135 - 7158.548: 0.4821% ( 3) 00:07:40.534 7158.548 - 7208.960: 0.5089% ( 3) 00:07:40.534 7208.960 - 7259.372: 0.5536% ( 5) 00:07:40.534 7259.372 - 7309.785: 0.6250% ( 8) 00:07:40.534 7309.785 - 7360.197: 0.7679% ( 16) 00:07:40.534 7360.197 - 7410.609: 1.0893% ( 36) 00:07:40.534 7410.609 - 7461.022: 1.5089% ( 47) 00:07:40.534 7461.022 - 7511.434: 1.9911% ( 54) 00:07:40.534 7511.434 - 7561.846: 2.3482% ( 40) 00:07:40.534 7561.846 - 7612.258: 2.6071% ( 29) 00:07:40.534 7612.258 - 7662.671: 2.8125% ( 23) 00:07:40.534 7662.671 - 7713.083: 2.9732% ( 18) 00:07:40.534 7713.083 - 7763.495: 3.2857% ( 35) 00:07:40.534 7763.495 - 7813.908: 3.5804% ( 33) 00:07:40.534 7813.908 - 7864.320: 3.6964% ( 13) 00:07:40.534 7864.320 - 7914.732: 3.8036% ( 12) 00:07:40.534 7914.732 - 7965.145: 3.9107% ( 12) 00:07:40.534 7965.145 - 8015.557: 4.0536% ( 16) 00:07:40.534 8015.557 - 8065.969: 4.2232% ( 19) 00:07:40.534 8065.969 - 8116.382: 4.6518% ( 48) 00:07:40.534 8116.382 - 8166.794: 4.9107% ( 29) 00:07:40.534 8166.794 - 8217.206: 5.0893% ( 20) 00:07:40.534 8217.206 - 8267.618: 5.2232% ( 15) 00:07:40.534 8267.618 - 8318.031: 5.3482% ( 14) 00:07:40.534 8318.031 - 8368.443: 5.5089% ( 18) 00:07:40.534 8368.443 - 8418.855: 5.5982% ( 10) 00:07:40.534 8418.855 - 8469.268: 5.6518% ( 6) 00:07:40.534 8469.268 - 8519.680: 5.7321% ( 9) 00:07:40.534 8519.680 - 8570.092: 5.8929% ( 18) 00:07:40.534 
8570.092 - 8620.505: 6.1339% ( 27) 00:07:40.534 8620.505 - 8670.917: 6.6250% ( 55) 00:07:40.534 8670.917 - 8721.329: 6.9554% ( 37) 00:07:40.534 8721.329 - 8771.742: 7.4732% ( 58) 00:07:40.534 8771.742 - 8822.154: 7.7768% ( 34) 00:07:40.535 8822.154 - 8872.566: 8.0893% ( 35) 00:07:40.535 8872.566 - 8922.978: 8.3839% ( 33) 00:07:40.535 8922.978 - 8973.391: 8.6429% ( 29) 00:07:40.535 8973.391 - 9023.803: 8.9286% ( 32) 00:07:40.535 9023.803 - 9074.215: 9.2857% ( 40) 00:07:40.535 9074.215 - 9124.628: 10.0357% ( 84) 00:07:40.535 9124.628 - 9175.040: 10.6339% ( 67) 00:07:40.535 9175.040 - 9225.452: 11.4107% ( 87) 00:07:40.535 9225.452 - 9275.865: 11.9821% ( 64) 00:07:40.535 9275.865 - 9326.277: 12.5893% ( 68) 00:07:40.535 9326.277 - 9376.689: 13.2232% ( 71) 00:07:40.535 9376.689 - 9427.102: 13.7857% ( 63) 00:07:40.535 9427.102 - 9477.514: 14.3304% ( 61) 00:07:40.535 9477.514 - 9527.926: 14.8214% ( 55) 00:07:40.535 9527.926 - 9578.338: 15.3304% ( 57) 00:07:40.535 9578.338 - 9628.751: 15.8036% ( 53) 00:07:40.535 9628.751 - 9679.163: 16.3661% ( 63) 00:07:40.535 9679.163 - 9729.575: 17.0179% ( 73) 00:07:40.535 9729.575 - 9779.988: 17.6875% ( 75) 00:07:40.535 9779.988 - 9830.400: 18.3304% ( 72) 00:07:40.535 9830.400 - 9880.812: 19.2321% ( 101) 00:07:40.535 9880.812 - 9931.225: 20.1964% ( 108) 00:07:40.535 9931.225 - 9981.637: 20.9554% ( 85) 00:07:40.535 9981.637 - 10032.049: 21.8393% ( 99) 00:07:40.535 10032.049 - 10082.462: 22.6429% ( 90) 00:07:40.535 10082.462 - 10132.874: 23.6518% ( 113) 00:07:40.535 10132.874 - 10183.286: 24.3571% ( 79) 00:07:40.535 10183.286 - 10233.698: 25.1339% ( 87) 00:07:40.535 10233.698 - 10284.111: 25.8304% ( 78) 00:07:40.535 10284.111 - 10334.523: 26.5982% ( 86) 00:07:40.535 10334.523 - 10384.935: 27.3482% ( 84) 00:07:40.535 10384.935 - 10435.348: 28.0268% ( 76) 00:07:40.535 10435.348 - 10485.760: 28.8214% ( 89) 00:07:40.535 10485.760 - 10536.172: 29.7054% ( 99) 00:07:40.535 10536.172 - 10586.585: 30.8482% ( 128) 00:07:40.535 10586.585 - 10636.997: 32.1161% ( 142) 00:07:40.535 10636.997 - 10687.409: 33.4018% ( 144) 00:07:40.535 10687.409 - 10737.822: 34.8571% ( 163) 00:07:40.535 10737.822 - 10788.234: 36.2857% ( 160) 00:07:40.535 10788.234 - 10838.646: 37.5804% ( 145) 00:07:40.535 10838.646 - 10889.058: 38.5893% ( 113) 00:07:40.535 10889.058 - 10939.471: 39.8125% ( 137) 00:07:40.535 10939.471 - 10989.883: 41.2232% ( 158) 00:07:40.535 10989.883 - 11040.295: 42.7054% ( 166) 00:07:40.535 11040.295 - 11090.708: 44.0357% ( 149) 00:07:40.535 11090.708 - 11141.120: 45.4018% ( 153) 00:07:40.535 11141.120 - 11191.532: 46.9911% ( 178) 00:07:40.535 11191.532 - 11241.945: 48.7321% ( 195) 00:07:40.535 11241.945 - 11292.357: 50.4375% ( 191) 00:07:40.535 11292.357 - 11342.769: 52.2946% ( 208) 00:07:40.535 11342.769 - 11393.182: 54.3036% ( 225) 00:07:40.535 11393.182 - 11443.594: 56.2411% ( 217) 00:07:40.535 11443.594 - 11494.006: 57.7411% ( 168) 00:07:40.535 11494.006 - 11544.418: 59.1429% ( 157) 00:07:40.535 11544.418 - 11594.831: 60.3482% ( 135) 00:07:40.535 11594.831 - 11645.243: 61.5893% ( 139) 00:07:40.535 11645.243 - 11695.655: 62.7857% ( 134) 00:07:40.535 11695.655 - 11746.068: 63.9821% ( 134) 00:07:40.535 11746.068 - 11796.480: 64.9821% ( 112) 00:07:40.535 11796.480 - 11846.892: 65.9732% ( 111) 00:07:40.535 11846.892 - 11897.305: 66.9196% ( 106) 00:07:40.535 11897.305 - 11947.717: 67.7768% ( 96) 00:07:40.535 11947.717 - 11998.129: 68.6786% ( 101) 00:07:40.535 11998.129 - 12048.542: 69.5357% ( 96) 00:07:40.535 12048.542 - 12098.954: 70.4821% ( 106) 00:07:40.535 12098.954 - 
12149.366: 71.2857% ( 90)
00:07:40.535 [... per-bucket latency data omitted: buckets continue to 25811.102 us, cumulative 100.0000% ...]
00:07:40.535
00:07:40.535 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:40.535 ==============================================================================
00:07:40.535 Range in us Cumulative IO count
00:07:40.535 [... per-bucket latency data omitted: 6604.012 us - 26617.698 us, cumulative 0.0089% - 100.0000% ...]
00:07:40.536
00:07:40.536 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:40.536 ==============================================================================
00:07:40.536 Range in us Cumulative IO count
00:07:40.536 [... per-bucket latency data omitted: 6704.837 us - 26214.400 us, cumulative 0.0089% - 100.0000% ...]
00:07:40.538
00:07:40.538 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:40.538 ==============================================================================
00:07:40.538 Range in us Cumulative IO count
00:07:40.538 [... per-bucket latency data omitted: 6755.249 us - 25407.803 us, cumulative 0.0179% - 100.0000% ...]
00:07:40.539
00:07:40.539 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:40.539 ==============================================================================
00:07:40.539 Range in us Cumulative IO count
00:07:40.539 [... per-bucket latency data omitted: 6956.898 us - 24903.680 us, cumulative 0.0179% - 100.0000% ...]
00:07:40.540
00:07:40.540 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:40.540 ==============================================================================
00:07:40.540 Range in us Cumulative IO count
00:07:40.540 [... per-bucket latency data omitted: 6755.249 us - 17543.483 us, cumulative 0.0089% - 100.0000% ...]
00:07:40.541
00:07:40.541 14:44:25 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:40.541 ************************************
00:07:40.541 END TEST nvme_perf
00:07:40.541 ************************************
00:07:40.541
00:07:40.541 real 0m2.519s
00:07:40.541 user 0m2.191s
00:07:40.541 sys 0m0.197s
00:07:40.541 14:44:25 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.541 14:44:25 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:40.541 14:44:25 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:40.541 14:44:25 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:40.541 14:44:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.541 14:44:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.541 ************************************
00:07:40.541 START TEST nvme_hello_world
00:07:40.541 ************************************
00:07:40.541 14:44:25 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:40.541 Initializing NVMe Controllers
00:07:40.541 Attached to 0000:00:11.0
00:07:40.541 Namespace ID: 1 size: 5GB
00:07:40.541 Attached to 0000:00:13.0
00:07:40.541 Namespace ID: 1 size: 1GB
00:07:40.541 Attached to 0000:00:10.0
00:07:40.541 Namespace ID: 1 size: 6GB
00:07:40.541 Attached to 0000:00:12.0
00:07:40.541 Namespace ID: 1 size: 4GB
00:07:40.541 Namespace ID: 2 size: 4GB
00:07:40.541 Namespace ID: 3 size: 4GB
00:07:40.541 Initialization complete.
00:07:40.541 INFO: using host memory buffer for IO
00:07:40.541 Hello world!
00:07:40.541 INFO: using host memory buffer for IO
00:07:40.541 Hello world!
00:07:40.541 INFO: using host memory buffer for IO
00:07:40.541 Hello world!
00:07:40.541 INFO: using host memory buffer for IO
00:07:40.541 Hello world!
00:07:40.541 INFO: using host memory buffer for IO
00:07:40.541 Hello world!
00:07:40.541 INFO: using host memory buffer for IO
00:07:40.541 Hello world!
00:07:40.541 ************************************
00:07:40.541 END TEST nvme_hello_world
00:07:40.541 ************************************
00:07:40.541
00:07:40.541 real 0m0.220s
00:07:40.541 user 0m0.090s
00:07:40.541 sys 0m0.088s
00:07:40.541 14:44:26 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.541 14:44:26 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:40.541 14:44:26 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:40.541 14:44:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:40.541 14:44:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.541 14:44:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.541 ************************************
00:07:40.541 START TEST nvme_sgl
00:07:40.541 ************************************
00:07:40.541 14:44:26 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:40.800 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:40.800 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:40.800 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:40.800 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:40.800 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:40.800 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:40.800 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:40.800 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:40.800 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:40.800 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:40.800 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:40.800 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:40.800 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:40.800 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:40.800 NVMe Readv/Writev Request test
00:07:40.800 Attached to 0000:00:11.0
00:07:40.800 Attached to 0000:00:13.0
00:07:40.800 Attached to 0000:00:10.0
00:07:40.800 Attached to 0000:00:12.0
00:07:40.800 0000:00:11.0: build_io_request_2 test passed
00:07:40.800 0000:00:11.0: build_io_request_4 test passed
00:07:40.800 0000:00:11.0: build_io_request_5 test passed
00:07:40.800 0000:00:11.0: build_io_request_6 test passed
00:07:40.800 0000:00:11.0: build_io_request_7 test passed
00:07:40.800 0000:00:11.0: build_io_request_10 test passed
00:07:40.800 0000:00:10.0: build_io_request_2 test passed
00:07:40.800 0000:00:10.0: build_io_request_4 test passed
00:07:40.800 0000:00:10.0: build_io_request_5 test passed
00:07:40.800 0000:00:10.0: build_io_request_6 test passed
00:07:40.800 0000:00:10.0: build_io_request_7 test passed
00:07:40.800 0000:00:10.0: build_io_request_10 test passed
00:07:40.800 Cleaning up...
00:07:40.800
00:07:40.800 real 0m0.287s
00:07:40.800 user 0m0.145s
00:07:40.800 sys 0m0.096s
00:07:40.800 14:44:26 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.800 14:44:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:40.800 ************************************
00:07:40.800 END TEST nvme_sgl
00:07:40.800 ************************************
00:07:41.059 14:44:26 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:41.059 14:44:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:41.059 14:44:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.059 14:44:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:41.059 ************************************
00:07:41.059 START TEST nvme_e2edp
00:07:41.059 ************************************
00:07:41.059 14:44:26 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:41.059 NVMe Write/Read with End-to-End data protection test
00:07:41.059 Attached to 0000:00:11.0
00:07:41.059 Attached to 0000:00:13.0
00:07:41.059 Attached to 0000:00:10.0
00:07:41.059 Attached to 0000:00:12.0
00:07:41.059 Cleaning up...
00:07:41.059
00:07:41.059 real 0m0.192s
00:07:41.059 user 0m0.069s
00:07:41.059 sys 0m0.090s
00:07:41.059 14:44:26 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.059 14:44:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:41.059 ************************************
00:07:41.059 END TEST nvme_e2edp
00:07:41.059 ************************************
00:07:41.059 14:44:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:41.059 14:44:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:41.059 14:44:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.059 14:44:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:41.059 ************************************
00:07:41.059 START TEST nvme_reserve
00:07:41.059 ************************************
00:07:41.059 14:44:26 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:41.318 =====================================================
00:07:41.318 NVMe Controller at PCI bus 0, device 17, function 0
00:07:41.318 =====================================================
00:07:41.318 Reservations: Not Supported
00:07:41.318 =====================================================
00:07:41.318 NVMe Controller at PCI bus 0, device 19, function 0
00:07:41.318 =====================================================
00:07:41.318 Reservations: Not Supported
00:07:41.318 =====================================================
00:07:41.318 NVMe Controller at PCI bus 0, device 16, function 0
00:07:41.318 =====================================================
00:07:41.318 Reservations: Not Supported
00:07:41.318 =====================================================
00:07:41.318 NVMe Controller at PCI bus 0, device 18, function 0
00:07:41.318 =====================================================
00:07:41.318 Reservations: Not Supported
00:07:41.318 Reservation test passed
00:07:41.318 ************************************
00:07:41.318 END TEST nvme_reserve
00:07:41.318 ************************************
00:07:41.318
00:07:41.318 real 0m0.215s
00:07:41.318 user 0m0.070s
00:07:41.318 sys 0m0.097s
00:07:41.318 14:44:26 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.318 14:44:26 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:41.318 14:44:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:41.318 14:44:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:41.318 14:44:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.318 14:44:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:41.318 ************************************
00:07:41.318 START TEST nvme_err_injection
00:07:41.318 ************************************
00:07:41.318 14:44:26 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:41.577 NVMe Error Injection test
00:07:41.577 Attached to 0000:00:11.0
00:07:41.577 Attached to 0000:00:13.0
00:07:41.577 Attached to 0000:00:10.0
00:07:41.577 Attached to 0000:00:12.0
00:07:41.577 0000:00:11.0: get features failed as expected
00:07:41.577 0000:00:13.0: get features failed as expected
00:07:41.577 0000:00:10.0: get features failed as expected
00:07:41.577 0000:00:12.0: get features failed as expected
00:07:41.577 0000:00:11.0: get features successfully as expected
00:07:41.577 0000:00:13.0: get features successfully as expected
00:07:41.577 0000:00:10.0: get features successfully as expected
00:07:41.577 0000:00:12.0: get features successfully as expected
00:07:41.577 0000:00:11.0: read failed as expected
00:07:41.577 0000:00:13.0: read failed as expected
00:07:41.577 0000:00:10.0: read failed as expected
00:07:41.577 0000:00:12.0: read failed as expected
00:07:41.577 0000:00:11.0: read successfully as expected
00:07:41.577 0000:00:13.0: read successfully as expected
00:07:41.577 0000:00:10.0: read successfully as expected
00:07:41.577 0000:00:12.0: read successfully as expected
00:07:41.577 Cleaning up...
00:07:41.577
00:07:41.577 real 0m0.230s
00:07:41.577 user 0m0.093s
00:07:41.577 sys 0m0.090s
00:07:41.577 14:44:27 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.577 ************************************
00:07:41.577 END TEST nvme_err_injection
00:07:41.577 ************************************
00:07:41.577 14:44:27 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:41.577 14:44:27 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:41.577 14:44:27 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:41.577 14:44:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.577 14:44:27 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:41.577 ************************************
00:07:41.577 START TEST nvme_overhead
00:07:41.577 ************************************
00:07:41.577 14:44:27 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:42.952 Initializing NVMe Controllers
00:07:42.952 Attached to 0000:00:11.0
00:07:42.952 Attached to 0000:00:13.0
00:07:42.952 Attached to 0000:00:10.0
00:07:42.952 Attached to 0000:00:12.0
00:07:42.952 Initialization complete. Launching workers.
00:07:42.952 submit (in ns) avg, min, max = 11475.2, 10003.1, 77825.4 00:07:42.952 complete (in ns) avg, min, max = 8439.9, 7327.7, 63420.0 00:07:42.952 00:07:42.952 Submit histogram 00:07:42.952 ================ 00:07:42.952 Range in us Cumulative Count 00:07:42.952 9.994 - 10.043: 0.0057% ( 1) 00:07:42.952 10.092 - 10.142: 0.0114% ( 1) 00:07:42.952 10.191 - 10.240: 0.0170% ( 1) 00:07:42.952 10.240 - 10.289: 0.0227% ( 1) 00:07:42.952 10.289 - 10.338: 0.0284% ( 1) 00:07:42.952 10.486 - 10.535: 0.0341% ( 1) 00:07:42.952 10.880 - 10.929: 0.1590% ( 22) 00:07:42.952 10.929 - 10.978: 1.1924% ( 182) 00:07:42.952 10.978 - 11.028: 5.9448% ( 837) 00:07:42.952 11.028 - 11.077: 17.4143% ( 2020) 00:07:42.952 11.077 - 11.126: 33.2898% ( 2796) 00:07:42.952 11.126 - 11.175: 49.1370% ( 2791) 00:07:42.952 11.175 - 11.225: 59.7490% ( 1869) 00:07:42.952 11.225 - 11.274: 65.5916% ( 1029) 00:07:42.952 11.274 - 11.323: 68.4476% ( 503) 00:07:42.952 11.323 - 11.372: 69.6911% ( 219) 00:07:42.952 11.372 - 11.422: 70.4463% ( 133) 00:07:42.952 11.422 - 11.471: 71.0765% ( 111) 00:07:42.952 11.471 - 11.520: 71.8658% ( 139) 00:07:42.952 11.520 - 11.569: 74.0802% ( 390) 00:07:42.952 11.569 - 11.618: 77.6573% ( 630) 00:07:42.952 11.618 - 11.668: 82.0236% ( 769) 00:07:42.952 11.668 - 11.717: 86.1742% ( 731) 00:07:42.952 11.717 - 11.766: 88.9223% ( 484) 00:07:42.952 11.766 - 11.815: 90.7449% ( 321) 00:07:42.952 11.815 - 11.865: 92.0338% ( 227) 00:07:42.953 11.865 - 11.914: 93.0502% ( 179) 00:07:42.953 11.914 - 11.963: 93.7032% ( 115) 00:07:42.953 11.963 - 12.012: 94.2312% ( 93) 00:07:42.953 12.012 - 12.062: 94.6741% ( 78) 00:07:42.953 12.062 - 12.111: 95.0091% ( 59) 00:07:42.953 12.111 - 12.160: 95.3270% ( 56) 00:07:42.953 12.160 - 12.209: 95.5996% ( 48) 00:07:42.953 12.209 - 12.258: 95.8778% ( 49) 00:07:42.953 12.258 - 12.308: 96.0538% ( 31) 00:07:42.953 12.308 - 12.357: 96.2015% ( 26) 00:07:42.953 12.357 - 12.406: 96.4002% ( 35) 00:07:42.953 12.406 - 12.455: 96.5251% ( 22) 00:07:42.953 12.455 - 12.505: 96.6443% ( 21) 00:07:42.953 12.505 - 12.554: 96.7465% ( 18) 00:07:42.953 12.554 - 12.603: 96.8374% ( 16) 00:07:42.953 12.603 - 12.702: 96.9737% ( 24) 00:07:42.953 12.702 - 12.800: 97.0475% ( 13) 00:07:42.953 12.800 - 12.898: 97.1270% ( 14) 00:07:42.953 12.898 - 12.997: 97.2008% ( 13) 00:07:42.953 12.997 - 13.095: 97.3143% ( 20) 00:07:42.953 13.095 - 13.194: 97.3370% ( 4) 00:07:42.953 13.194 - 13.292: 97.3938% ( 10) 00:07:42.953 13.292 - 13.391: 97.4109% ( 3) 00:07:42.953 13.391 - 13.489: 97.4733% ( 11) 00:07:42.953 13.489 - 13.588: 97.4790% ( 1) 00:07:42.953 13.588 - 13.686: 97.5017% ( 4) 00:07:42.953 13.686 - 13.785: 97.5244% ( 4) 00:07:42.953 13.785 - 13.883: 97.5585% ( 6) 00:07:42.953 13.883 - 13.982: 97.5642% ( 1) 00:07:42.953 13.982 - 14.080: 97.5698% ( 1) 00:07:42.953 14.080 - 14.178: 97.5755% ( 1) 00:07:42.953 14.178 - 14.277: 97.5982% ( 4) 00:07:42.953 14.277 - 14.375: 97.6550% ( 10) 00:07:42.953 14.375 - 14.474: 97.7799% ( 22) 00:07:42.953 14.474 - 14.572: 97.9048% ( 22) 00:07:42.953 14.572 - 14.671: 98.0241% ( 21) 00:07:42.953 14.671 - 14.769: 98.0525% ( 5) 00:07:42.953 14.769 - 14.868: 98.0695% ( 3) 00:07:42.953 14.868 - 14.966: 98.1263% ( 10) 00:07:42.953 14.966 - 15.065: 98.1433% ( 3) 00:07:42.953 15.065 - 15.163: 98.1660% ( 4) 00:07:42.953 15.163 - 15.262: 98.1887% ( 4) 00:07:42.953 15.262 - 15.360: 98.2114% ( 4) 00:07:42.953 15.360 - 15.458: 98.2228% ( 2) 00:07:42.953 15.557 - 15.655: 98.2342% ( 2) 00:07:42.953 15.655 - 15.754: 98.2455% ( 2) 00:07:42.953 15.754 - 15.852: 98.2625% ( 3) 00:07:42.953 15.852 - 
15.951: 98.2682% ( 1) 00:07:42.953 15.951 - 16.049: 98.2966% ( 5) 00:07:42.953 16.049 - 16.148: 98.3136% ( 3) 00:07:42.953 16.148 - 16.246: 98.3307% ( 3) 00:07:42.953 16.246 - 16.345: 98.3761% ( 8) 00:07:42.953 16.345 - 16.443: 98.3988% ( 4) 00:07:42.953 16.443 - 16.542: 98.4272% ( 5) 00:07:42.953 16.542 - 16.640: 98.4897% ( 11) 00:07:42.953 16.640 - 16.738: 98.5692% ( 14) 00:07:42.953 16.738 - 16.837: 98.6884% ( 21) 00:07:42.953 16.837 - 16.935: 98.7906% ( 18) 00:07:42.953 16.935 - 17.034: 98.8871% ( 17) 00:07:42.953 17.034 - 17.132: 98.9439% ( 10) 00:07:42.953 17.132 - 17.231: 99.0120% ( 12) 00:07:42.953 17.231 - 17.329: 99.0575% ( 8) 00:07:42.953 17.329 - 17.428: 99.1199% ( 11) 00:07:42.953 17.428 - 17.526: 99.1540% ( 6) 00:07:42.953 17.526 - 17.625: 99.2221% ( 12) 00:07:42.953 17.625 - 17.723: 99.2505% ( 5) 00:07:42.953 17.723 - 17.822: 99.3016% ( 9) 00:07:42.953 17.822 - 17.920: 99.3243% ( 4) 00:07:42.953 17.920 - 18.018: 99.3357% ( 2) 00:07:42.953 18.018 - 18.117: 99.3414% ( 1) 00:07:42.953 18.215 - 18.314: 99.3470% ( 1) 00:07:42.953 18.314 - 18.412: 99.3527% ( 1) 00:07:42.953 18.412 - 18.511: 99.3584% ( 1) 00:07:42.953 18.609 - 18.708: 99.3641% ( 1) 00:07:42.953 18.708 - 18.806: 99.3697% ( 1) 00:07:42.953 18.806 - 18.905: 99.3754% ( 1) 00:07:42.953 18.905 - 19.003: 99.3981% ( 4) 00:07:42.953 19.102 - 19.200: 99.4095% ( 2) 00:07:42.953 19.298 - 19.397: 99.4379% ( 5) 00:07:42.953 19.397 - 19.495: 99.4549% ( 3) 00:07:42.953 19.495 - 19.594: 99.5060% ( 9) 00:07:42.953 19.594 - 19.692: 99.5514% ( 8) 00:07:42.953 19.692 - 19.791: 99.6025% ( 9) 00:07:42.953 19.791 - 19.889: 99.6366% ( 6) 00:07:42.953 19.889 - 19.988: 99.6764% ( 7) 00:07:42.953 19.988 - 20.086: 99.6877% ( 2) 00:07:42.953 20.086 - 20.185: 99.7331% ( 8) 00:07:42.953 20.185 - 20.283: 99.7672% ( 6) 00:07:42.953 20.283 - 20.382: 99.7842% ( 3) 00:07:42.953 20.382 - 20.480: 99.7956% ( 2) 00:07:42.953 20.480 - 20.578: 99.8013% ( 1) 00:07:42.953 20.578 - 20.677: 99.8183% ( 3) 00:07:42.953 20.677 - 20.775: 99.8297% ( 2) 00:07:42.953 20.775 - 20.874: 99.8353% ( 1) 00:07:42.953 20.972 - 21.071: 99.8467% ( 2) 00:07:42.953 21.071 - 21.169: 99.8581% ( 2) 00:07:42.953 21.268 - 21.366: 99.8637% ( 1) 00:07:42.953 21.366 - 21.465: 99.8694% ( 1) 00:07:42.953 21.563 - 21.662: 99.8751% ( 1) 00:07:42.953 22.055 - 22.154: 99.8808% ( 1) 00:07:42.953 22.351 - 22.449: 99.8864% ( 1) 00:07:42.953 22.942 - 23.040: 99.8921% ( 1) 00:07:42.953 23.040 - 23.138: 99.8978% ( 1) 00:07:42.953 23.335 - 23.434: 99.9035% ( 1) 00:07:42.953 24.222 - 24.320: 99.9092% ( 1) 00:07:42.953 24.517 - 24.615: 99.9205% ( 2) 00:07:42.953 25.403 - 25.600: 99.9262% ( 1) 00:07:42.953 26.388 - 26.585: 99.9319% ( 1) 00:07:42.953 27.372 - 27.569: 99.9375% ( 1) 00:07:42.953 28.357 - 28.554: 99.9432% ( 1) 00:07:42.953 28.948 - 29.145: 99.9489% ( 1) 00:07:42.953 35.052 - 35.249: 99.9546% ( 1) 00:07:42.953 40.960 - 41.157: 99.9603% ( 1) 00:07:42.953 44.898 - 45.095: 99.9659% ( 1) 00:07:42.953 47.458 - 47.655: 99.9716% ( 1) 00:07:42.953 48.049 - 48.246: 99.9773% ( 1) 00:07:42.953 48.443 - 48.640: 99.9830% ( 1) 00:07:42.953 54.351 - 54.745: 99.9886% ( 1) 00:07:42.953 70.892 - 71.286: 99.9943% ( 1) 00:07:42.953 77.588 - 77.982: 100.0000% ( 1) 00:07:42.953 00:07:42.953 Complete histogram 00:07:42.953 ================== 00:07:42.953 Range in us Cumulative Count 00:07:42.953 7.286 - 7.335: 0.0170% ( 3) 00:07:42.953 7.335 - 7.385: 1.4592% ( 254) 00:07:42.953 7.385 - 7.434: 10.4247% ( 1579) 00:07:42.953 7.434 - 7.483: 31.2401% ( 3666) 00:07:42.953 7.483 - 7.532: 52.9412% ( 3822) 00:07:42.953 
7.532 - 7.582: 66.0118% ( 2302) 00:07:42.953 7.582 - 7.631: 72.0929% ( 1071) 00:07:42.953 7.631 - 7.680: 74.5060% ( 425) 00:07:42.953 7.680 - 7.729: 75.6700% ( 205) 00:07:42.953 7.729 - 7.778: 76.1640% ( 87) 00:07:42.953 7.778 - 7.828: 76.4990% ( 59) 00:07:42.953 7.828 - 7.877: 76.6807% ( 32) 00:07:42.953 7.877 - 7.926: 76.7942% ( 20) 00:07:42.953 7.926 - 7.975: 76.8908% ( 17) 00:07:42.953 7.975 - 8.025: 76.9702% ( 14) 00:07:42.953 8.025 - 8.074: 77.0327% ( 11) 00:07:42.953 8.074 - 8.123: 77.0497% ( 3) 00:07:42.953 8.123 - 8.172: 77.0611% ( 2) 00:07:42.953 8.172 - 8.222: 77.0725% ( 2) 00:07:42.953 8.222 - 8.271: 77.0838% ( 2) 00:07:42.953 8.271 - 8.320: 77.0895% ( 1) 00:07:42.953 8.320 - 8.369: 77.1008% ( 2) 00:07:42.953 8.517 - 8.566: 77.1065% ( 1) 00:07:42.953 8.615 - 8.665: 77.1179% ( 2) 00:07:42.953 8.862 - 8.911: 77.1406% ( 4) 00:07:42.953 8.960 - 9.009: 77.1463% ( 1) 00:07:42.953 9.058 - 9.108: 77.1519% ( 1) 00:07:42.953 9.157 - 9.206: 77.1576% ( 1) 00:07:42.953 9.895 - 9.945: 77.1633% ( 1) 00:07:42.953 9.994 - 10.043: 77.1690% ( 1) 00:07:42.953 10.092 - 10.142: 77.1860% ( 3) 00:07:42.953 10.388 - 10.437: 77.1917% ( 1) 00:07:42.953 10.732 - 10.782: 77.1974% ( 1) 00:07:42.953 10.782 - 10.831: 77.2030% ( 1) 00:07:42.953 10.831 - 10.880: 77.2541% ( 9) 00:07:42.953 10.880 - 10.929: 77.4926% ( 42) 00:07:42.953 10.929 - 10.978: 78.1002% ( 107) 00:07:42.953 10.978 - 11.028: 79.1392% ( 183) 00:07:42.953 11.028 - 11.077: 80.5814% ( 254) 00:07:42.953 11.077 - 11.126: 82.4381% ( 327) 00:07:42.953 11.126 - 11.175: 84.9421% ( 441) 00:07:42.953 11.175 - 11.225: 87.8776% ( 517) 00:07:42.953 11.225 - 11.274: 91.1140% ( 570) 00:07:42.953 11.274 - 11.323: 93.5044% ( 421) 00:07:42.953 11.323 - 11.372: 94.9807% ( 260) 00:07:42.953 11.372 - 11.422: 95.8551% ( 154) 00:07:42.953 11.422 - 11.471: 96.4740% ( 109) 00:07:42.953 11.471 - 11.520: 96.9623% ( 86) 00:07:42.953 11.520 - 11.569: 97.2803% ( 56) 00:07:42.953 11.569 - 11.618: 97.5812% ( 53) 00:07:42.953 11.618 - 11.668: 97.7742% ( 34) 00:07:42.953 11.668 - 11.717: 97.8708% ( 17) 00:07:42.953 11.717 - 11.766: 97.9389% ( 12) 00:07:42.953 11.766 - 11.815: 98.0184% ( 14) 00:07:42.953 11.815 - 11.865: 98.0695% ( 9) 00:07:42.953 11.865 - 11.914: 98.1092% ( 7) 00:07:42.953 11.914 - 11.963: 98.1490% ( 7) 00:07:42.953 11.963 - 12.012: 98.1774% ( 5) 00:07:42.953 12.012 - 12.062: 98.2058% ( 5) 00:07:42.953 12.062 - 12.111: 98.2342% ( 5) 00:07:42.953 12.111 - 12.160: 98.2569% ( 4) 00:07:42.954 12.209 - 12.258: 98.2625% ( 1) 00:07:42.954 12.258 - 12.308: 98.2682% ( 1) 00:07:42.954 12.357 - 12.406: 98.2853% ( 3) 00:07:42.954 12.406 - 12.455: 98.2909% ( 1) 00:07:42.954 12.505 - 12.554: 98.3023% ( 2) 00:07:42.954 12.554 - 12.603: 98.3080% ( 1) 00:07:42.954 12.603 - 12.702: 98.3250% ( 3) 00:07:42.954 12.702 - 12.800: 98.3364% ( 2) 00:07:42.954 12.800 - 12.898: 98.3591% ( 4) 00:07:42.954 12.898 - 12.997: 98.3761% ( 3) 00:07:42.954 12.997 - 13.095: 98.4613% ( 15) 00:07:42.954 13.095 - 13.194: 98.5294% ( 12) 00:07:42.954 13.194 - 13.292: 98.6430% ( 20) 00:07:42.954 13.292 - 13.391: 98.7281% ( 15) 00:07:42.954 13.391 - 13.489: 98.8133% ( 15) 00:07:42.954 13.489 - 13.588: 98.8985% ( 15) 00:07:42.954 13.588 - 13.686: 98.9893% ( 16) 00:07:42.954 13.686 - 13.785: 99.0518% ( 11) 00:07:42.954 13.785 - 13.883: 99.1256% ( 13) 00:07:42.954 13.883 - 13.982: 99.1994% ( 13) 00:07:42.954 13.982 - 14.080: 99.2562% ( 10) 00:07:42.954 14.080 - 14.178: 99.2675% ( 2) 00:07:42.954 14.178 - 14.277: 99.3300% ( 11) 00:07:42.954 14.277 - 14.375: 99.3584% ( 5) 00:07:42.954 14.375 - 14.474: 
99.3697% ( 2) 00:07:42.954 14.474 - 14.572: 99.3981% ( 5) 00:07:42.954 14.572 - 14.671: 99.4095% ( 2) 00:07:42.954 14.671 - 14.769: 99.4208% ( 2) 00:07:42.954 14.769 - 14.868: 99.4265% ( 1) 00:07:42.954 15.360 - 15.458: 99.4379% ( 2) 00:07:42.954 15.557 - 15.655: 99.4436% ( 1) 00:07:42.954 15.655 - 15.754: 99.4492% ( 1) 00:07:42.954 16.345 - 16.443: 99.4606% ( 2) 00:07:42.954 16.443 - 16.542: 99.4663% ( 1) 00:07:42.954 16.542 - 16.640: 99.4720% ( 1) 00:07:42.954 16.738 - 16.837: 99.4833% ( 2) 00:07:42.954 16.837 - 16.935: 99.4890% ( 1) 00:07:42.954 16.935 - 17.034: 99.4947% ( 1) 00:07:42.954 17.034 - 17.132: 99.5060% ( 2) 00:07:42.954 17.428 - 17.526: 99.5174% ( 2) 00:07:42.954 17.526 - 17.625: 99.5287% ( 2) 00:07:42.954 17.625 - 17.723: 99.5571% ( 5) 00:07:42.954 17.723 - 17.822: 99.5798% ( 4) 00:07:42.954 17.822 - 17.920: 99.6139% ( 6) 00:07:42.954 17.920 - 18.018: 99.6423% ( 5) 00:07:42.954 18.018 - 18.117: 99.6764% ( 6) 00:07:42.954 18.117 - 18.215: 99.6991% ( 4) 00:07:42.954 18.215 - 18.314: 99.7161% ( 3) 00:07:42.954 18.314 - 18.412: 99.7331% ( 3) 00:07:42.954 18.412 - 18.511: 99.7445% ( 2) 00:07:42.954 18.511 - 18.609: 99.7786% ( 6) 00:07:42.954 18.609 - 18.708: 99.8013% ( 4) 00:07:42.954 18.708 - 18.806: 99.8069% ( 1) 00:07:42.954 18.905 - 19.003: 99.8240% ( 3) 00:07:42.954 19.003 - 19.102: 99.8297% ( 1) 00:07:42.954 19.102 - 19.200: 99.8467% ( 3) 00:07:42.954 19.298 - 19.397: 99.8524% ( 1) 00:07:42.954 19.397 - 19.495: 99.8581% ( 1) 00:07:42.954 19.594 - 19.692: 99.8637% ( 1) 00:07:42.954 19.692 - 19.791: 99.8694% ( 1) 00:07:42.954 19.988 - 20.086: 99.8808% ( 2) 00:07:42.954 20.382 - 20.480: 99.8864% ( 1) 00:07:42.954 20.972 - 21.071: 99.8921% ( 1) 00:07:42.954 22.252 - 22.351: 99.8978% ( 1) 00:07:42.954 22.351 - 22.449: 99.9035% ( 1) 00:07:42.954 22.843 - 22.942: 99.9092% ( 1) 00:07:42.954 23.631 - 23.729: 99.9148% ( 1) 00:07:42.954 23.729 - 23.828: 99.9262% ( 2) 00:07:42.954 24.615 - 24.714: 99.9319% ( 1) 00:07:42.954 24.812 - 24.911: 99.9375% ( 1) 00:07:42.954 24.911 - 25.009: 99.9432% ( 1) 00:07:42.954 25.403 - 25.600: 99.9489% ( 1) 00:07:42.954 27.175 - 27.372: 99.9546% ( 1) 00:07:42.954 27.766 - 27.963: 99.9659% ( 2) 00:07:42.954 28.160 - 28.357: 99.9716% ( 1) 00:07:42.954 34.462 - 34.658: 99.9773% ( 1) 00:07:42.954 34.658 - 34.855: 99.9830% ( 1) 00:07:42.954 36.431 - 36.628: 99.9886% ( 1) 00:07:42.954 46.474 - 46.671: 99.9943% ( 1) 00:07:42.954 63.409 - 63.803: 100.0000% ( 1) 00:07:42.954 00:07:42.954 ************************************ 00:07:42.954 END TEST nvme_overhead 00:07:42.954 ************************************ 00:07:42.954 00:07:42.954 real 0m1.215s 00:07:42.954 user 0m1.063s 00:07:42.954 sys 0m0.106s 00:07:42.954 14:44:28 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.954 14:44:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:42.954 14:44:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:42.954 14:44:28 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:42.954 14:44:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.954 14:44:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.954 ************************************ 00:07:42.954 START TEST nvme_arbitration 00:07:42.954 ************************************ 00:07:42.954 14:44:28 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:46.235 Initializing NVMe 
Controllers 00:07:46.235 Attached to 0000:00:11.0 00:07:46.235 Attached to 0000:00:13.0 00:07:46.235 Attached to 0000:00:10.0 00:07:46.235 Attached to 0000:00:12.0 00:07:46.235 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:07:46.235 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:07:46.235 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:07:46.235 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:46.235 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:46.235 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:46.235 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:46.235 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:46.235 Initialization complete. Launching workers. 00:07:46.235 Starting thread on core 1 with urgent priority queue 00:07:46.235 Starting thread on core 2 with urgent priority queue 00:07:46.235 Starting thread on core 3 with urgent priority queue 00:07:46.235 Starting thread on core 0 with urgent priority queue 00:07:46.235 QEMU NVMe Ctrl (12341 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:07:46.235 QEMU NVMe Ctrl (12342 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:07:46.235 QEMU NVMe Ctrl (12343 ) core 1: 981.33 IO/s 101.90 secs/100000 ios 00:07:46.235 QEMU NVMe Ctrl (12342 ) core 1: 981.33 IO/s 101.90 secs/100000 ios 00:07:46.235 QEMU NVMe Ctrl (12340 ) core 2: 1002.67 IO/s 99.73 secs/100000 ios 00:07:46.235 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios 00:07:46.235 ======================================================== 00:07:46.235 00:07:46.235 00:07:46.235 real 0m3.308s 00:07:46.235 user 0m9.268s 00:07:46.235 sys 0m0.106s 00:07:46.235 14:44:31 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.235 ************************************ 00:07:46.235 END TEST nvme_arbitration 00:07:46.235 ************************************ 00:07:46.235 14:44:31 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:46.235 14:44:31 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:46.235 14:44:31 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:46.235 14:44:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.235 14:44:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.235 ************************************ 00:07:46.235 START TEST nvme_single_aen 00:07:46.235 ************************************ 00:07:46.235 14:44:31 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:46.494 Asynchronous Event Request test 00:07:46.494 Attached to 0000:00:11.0 00:07:46.494 Attached to 0000:00:13.0 00:07:46.494 Attached to 0000:00:10.0 00:07:46.494 Attached to 0000:00:12.0 00:07:46.494 Reset controller to setup AER completions for this process 00:07:46.494 Registering asynchronous event callbacks... 
00:07:46.494 Getting orig temperature thresholds of all controllers 00:07:46.494 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.494 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.494 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.494 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.494 Setting all controllers temperature threshold low to trigger AER 00:07:46.494 Waiting for all controllers temperature threshold to be set lower 00:07:46.494 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.494 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:46.494 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.494 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:46.494 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.494 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:46.494 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.494 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:46.494 Waiting for all controllers to trigger AER and reset threshold 00:07:46.494 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.494 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.494 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.494 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.494 Cleaning up... 00:07:46.494 00:07:46.494 real 0m0.204s 00:07:46.494 user 0m0.076s 00:07:46.494 sys 0m0.085s 00:07:46.494 14:44:31 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.494 14:44:31 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:46.494 ************************************ 00:07:46.494 END TEST nvme_single_aen 00:07:46.494 ************************************ 00:07:46.494 14:44:31 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:46.494 14:44:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.494 14:44:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.494 14:44:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.494 ************************************ 00:07:46.494 START TEST nvme_doorbell_aers 00:07:46.494 ************************************ 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:46.494 14:44:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
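The two traced lines just above are how nvme_doorbell_aers discovers which controllers to target: scripts/gen_nvme.sh prints a JSON attach configuration for every NVMe device on the node, and the jq filter extracts only the PCI addresses, which the test then loops over with doorbell_aers. A standalone sketch of that enumeration, using exactly the pipeline recorded in the trace:

  # sketch: reproduce the bdf enumeration done by get_nvme_bdfs
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # this runner reports 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0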
00:07:46.753 14:44:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:46.753 14:44:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:46.753 14:44:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:46.753 14:44:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:46.753 [2024-11-17 14:44:32.286694] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:07:56.778 Executing: test_write_invalid_db 00:07:56.778 Waiting for AER completion... 00:07:56.778 Failure: test_write_invalid_db 00:07:56.778 00:07:56.778 Executing: test_invalid_db_write_overflow_sq 00:07:56.778 Waiting for AER completion... 00:07:56.778 Failure: test_invalid_db_write_overflow_sq 00:07:56.778 00:07:56.778 Executing: test_invalid_db_write_overflow_cq 00:07:56.778 Waiting for AER completion... 00:07:56.778 Failure: test_invalid_db_write_overflow_cq 00:07:56.778 00:07:56.778 14:44:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:56.778 14:44:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:57.036 [2024-11-17 14:44:42.322542] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:06.997 Executing: test_write_invalid_db 00:08:06.997 Waiting for AER completion... 00:08:06.997 Failure: test_write_invalid_db 00:08:06.997 00:08:06.997 Executing: test_invalid_db_write_overflow_sq 00:08:06.997 Waiting for AER completion... 00:08:06.997 Failure: test_invalid_db_write_overflow_sq 00:08:06.997 00:08:06.997 Executing: test_invalid_db_write_overflow_cq 00:08:06.997 Waiting for AER completion... 00:08:06.997 Failure: test_invalid_db_write_overflow_cq 00:08:06.997 00:08:06.997 14:44:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:06.997 14:44:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:06.997 [2024-11-17 14:44:52.361743] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:16.988 Executing: test_write_invalid_db 00:08:16.988 Waiting for AER completion... 00:08:16.988 Failure: test_write_invalid_db 00:08:16.988 00:08:16.988 Executing: test_invalid_db_write_overflow_sq 00:08:16.988 Waiting for AER completion... 00:08:16.988 Failure: test_invalid_db_write_overflow_sq 00:08:16.988 00:08:16.988 Executing: test_invalid_db_write_overflow_cq 00:08:16.988 Waiting for AER completion... 
00:08:16.988 Failure: test_invalid_db_write_overflow_cq 00:08:16.988 00:08:16.988 14:45:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:16.988 14:45:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:16.988 [2024-11-17 14:45:02.374595] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.099 Executing: test_write_invalid_db 00:08:27.099 Waiting for AER completion... 00:08:27.099 Failure: test_write_invalid_db 00:08:27.099 00:08:27.099 Executing: test_invalid_db_write_overflow_sq 00:08:27.099 Waiting for AER completion... 00:08:27.099 Failure: test_invalid_db_write_overflow_sq 00:08:27.099 00:08:27.099 Executing: test_invalid_db_write_overflow_cq 00:08:27.099 Waiting for AER completion... 00:08:27.099 Failure: test_invalid_db_write_overflow_cq 00:08:27.099 00:08:27.099 00:08:27.099 real 0m40.199s 00:08:27.099 user 0m34.069s 00:08:27.099 sys 0m5.729s 00:08:27.099 14:45:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.099 14:45:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:27.099 ************************************ 00:08:27.099 END TEST nvme_doorbell_aers 00:08:27.099 ************************************ 00:08:27.099 14:45:12 nvme -- nvme/nvme.sh@97 -- # uname 00:08:27.099 14:45:12 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:27.099 14:45:12 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:27.099 14:45:12 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:27.099 14:45:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.099 14:45:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.099 ************************************ 00:08:27.099 START TEST nvme_multi_aen 00:08:27.099 ************************************ 00:08:27.099 14:45:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:27.099 [2024-11-17 14:45:12.420197] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.099 [2024-11-17 14:45:12.420247] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.099 [2024-11-17 14:45:12.420257] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.099 [2024-11-17 14:45:12.421338] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.099 [2024-11-17 14:45:12.421357] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.099 [2024-11-17 14:45:12.421364] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.100 [2024-11-17 14:45:12.422462] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. 
Dropping the request. 00:08:27.100 [2024-11-17 14:45:12.422552] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.100 [2024-11-17 14:45:12.422603] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.100 [2024-11-17 14:45:12.423577] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.100 [2024-11-17 14:45:12.423666] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.100 [2024-11-17 14:45:12.423714] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63076) is not found. Dropping the request. 00:08:27.100 Child process pid: 63597 00:08:27.100 [Child] Asynchronous Event Request test 00:08:27.100 [Child] Attached to 0000:00:11.0 00:08:27.100 [Child] Attached to 0000:00:13.0 00:08:27.100 [Child] Attached to 0000:00:10.0 00:08:27.100 [Child] Attached to 0000:00:12.0 00:08:27.100 [Child] Registering asynchronous event callbacks... 00:08:27.100 [Child] Getting orig temperature thresholds of all controllers 00:08:27.100 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.100 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.100 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.100 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.100 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:27.100 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.100 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.100 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.100 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.100 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.100 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.100 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.100 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.100 [Child] Cleaning up... 00:08:27.358 Asynchronous Event Request test 00:08:27.358 Attached to 0000:00:11.0 00:08:27.358 Attached to 0000:00:13.0 00:08:27.358 Attached to 0000:00:10.0 00:08:27.358 Attached to 0000:00:12.0 00:08:27.358 Reset controller to setup AER completions for this process 00:08:27.358 Registering asynchronous event callbacks... 
00:08:27.358 Getting orig temperature thresholds of all controllers 00:08:27.358 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.358 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.358 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.358 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.358 Setting all controllers temperature threshold low to trigger AER 00:08:27.358 Waiting for all controllers temperature threshold to be set lower 00:08:27.358 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.358 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:27.358 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.358 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:27.358 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.358 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:27.358 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.358 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:27.358 Waiting for all controllers to trigger AER and reset threshold 00:08:27.358 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.358 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.358 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.358 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.358 Cleaning up... 00:08:27.358 00:08:27.358 real 0m0.421s 00:08:27.358 user 0m0.133s 00:08:27.358 sys 0m0.186s 00:08:27.358 14:45:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.358 14:45:12 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:27.358 ************************************ 00:08:27.358 END TEST nvme_multi_aen 00:08:27.358 ************************************ 00:08:27.358 14:45:12 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:27.358 14:45:12 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:27.358 14:45:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.358 14:45:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.358 ************************************ 00:08:27.358 START TEST nvme_startup 00:08:27.358 ************************************ 00:08:27.358 14:45:12 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:27.617 Initializing NVMe Controllers 00:08:27.617 Attached to 0000:00:11.0 00:08:27.617 Attached to 0000:00:13.0 00:08:27.617 Attached to 0000:00:10.0 00:08:27.617 Attached to 0000:00:12.0 00:08:27.617 Initialization complete. 00:08:27.617 Time used:144323.156 (us). 
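Both asynchronous-event tests above use the same aer example binary: nvme_single_aen runs it once, and nvme_multi_aen adds -m, after which the log shows a separate [Child] process attaching to the same controllers; in both passes the temperature threshold is lowered to force each controller to raise an AER. The startup check that follows simply times controller bring-up. A sketch of the three invocations as they appear in this excerpt (reading -m as the multi-process switch is my inference from the [Child] output, not something the tool's help text shows here):

  # sketch: the AER and startup invocations recorded in this log
  cd /home/vagrant/spdk_repo/spdk
  ./test/nvme/aer/aer -T -i 0              # nvme_single_aen
  ./test/nvme/aer/aer -m -T -i 0           # nvme_multi_aen (spawns the [Child] process seen above)
  ./test/nvme/startup/startup -t 1000000   # nvme_startup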
00:08:27.617 ************************************ 00:08:27.617 END TEST nvme_startup 00:08:27.617 ************************************ 00:08:27.617 00:08:27.617 real 0m0.209s 00:08:27.617 user 0m0.073s 00:08:27.617 sys 0m0.092s 00:08:27.617 14:45:12 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.617 14:45:12 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:27.617 14:45:12 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:27.617 14:45:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.617 14:45:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.617 14:45:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.617 ************************************ 00:08:27.617 START TEST nvme_multi_secondary 00:08:27.617 ************************************ 00:08:27.617 14:45:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:27.617 14:45:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63648 00:08:27.617 14:45:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63649 00:08:27.617 14:45:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:27.617 14:45:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:27.617 14:45:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:30.899 Initializing NVMe Controllers 00:08:30.899 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.899 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.899 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.899 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.899 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:30.899 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:30.899 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:30.899 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:30.899 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:30.899 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:30.899 Initialization complete. Launching workers. 
00:08:30.899 ======================================================== 00:08:30.899 Latency(us) 00:08:30.899 Device Information : IOPS MiB/s Average min max 00:08:30.899 PCIE (0000:00:11.0) NSID 1 from core 2: 3253.35 12.71 4917.64 797.57 12898.98 00:08:30.899 PCIE (0000:00:13.0) NSID 1 from core 2: 3253.35 12.71 4917.70 801.66 16966.34 00:08:30.899 PCIE (0000:00:10.0) NSID 1 from core 2: 3253.35 12.71 4916.80 772.96 13719.32 00:08:30.899 PCIE (0000:00:12.0) NSID 1 from core 2: 3253.35 12.71 4917.60 799.97 13753.64 00:08:30.899 PCIE (0000:00:12.0) NSID 2 from core 2: 3253.35 12.71 4917.34 791.87 13290.52 00:08:30.899 PCIE (0000:00:12.0) NSID 3 from core 2: 3253.35 12.71 4917.32 800.84 12790.74 00:08:30.899 ======================================================== 00:08:30.899 Total : 19520.11 76.25 4917.40 772.96 16966.34 00:08:30.899 00:08:30.899 14:45:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63648 00:08:30.899 Initializing NVMe Controllers 00:08:30.899 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.899 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.899 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.899 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.899 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:30.899 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:30.899 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:30.899 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:30.899 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:30.899 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:30.899 Initialization complete. Launching workers. 00:08:30.899 ======================================================== 00:08:30.899 Latency(us) 00:08:30.899 Device Information : IOPS MiB/s Average min max 00:08:30.899 PCIE (0000:00:11.0) NSID 1 from core 1: 7831.53 30.59 2042.59 733.09 7018.34 00:08:30.899 PCIE (0000:00:13.0) NSID 1 from core 1: 7831.53 30.59 2042.53 724.62 6241.24 00:08:30.899 PCIE (0000:00:10.0) NSID 1 from core 1: 7831.53 30.59 2041.58 704.17 6828.63 00:08:30.899 PCIE (0000:00:12.0) NSID 1 from core 1: 7831.53 30.59 2042.46 724.95 7153.07 00:08:30.899 PCIE (0000:00:12.0) NSID 2 from core 1: 7831.53 30.59 2042.50 720.37 6950.23 00:08:30.899 PCIE (0000:00:12.0) NSID 3 from core 1: 7831.53 30.59 2042.47 732.07 6533.84 00:08:30.899 ======================================================== 00:08:30.899 Total : 46989.19 183.55 2042.36 704.17 7153.07 00:08:30.899 00:08:33.431 Initializing NVMe Controllers 00:08:33.431 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.431 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.431 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.431 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.431 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:33.431 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:33.431 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:33.431 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:33.431 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:33.431 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:33.431 Initialization complete. Launching workers. 
00:08:33.431 ======================================================== 00:08:33.431 Latency(us) 00:08:33.431 Device Information : IOPS MiB/s Average min max 00:08:33.431 PCIE (0000:00:11.0) NSID 1 from core 0: 11106.10 43.38 1440.27 705.68 5872.91 00:08:33.431 PCIE (0000:00:13.0) NSID 1 from core 0: 11106.10 43.38 1440.24 703.01 6151.00 00:08:33.431 PCIE (0000:00:10.0) NSID 1 from core 0: 11106.10 43.38 1439.38 681.64 6001.68 00:08:33.431 PCIE (0000:00:12.0) NSID 1 from core 0: 11106.10 43.38 1440.19 685.83 6529.85 00:08:33.431 PCIE (0000:00:12.0) NSID 2 from core 0: 11106.10 43.38 1440.16 592.03 6502.56 00:08:33.431 PCIE (0000:00:12.0) NSID 3 from core 0: 11106.10 43.38 1440.15 554.49 6156.25 00:08:33.431 ======================================================== 00:08:33.431 Total : 66636.62 260.30 1440.06 554.49 6529.85 00:08:33.431 00:08:33.431 14:45:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63649 00:08:33.431 14:45:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63723 00:08:33.431 14:45:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:33.431 14:45:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:33.431 14:45:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63724 00:08:33.431 14:45:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:36.715 Initializing NVMe Controllers 00:08:36.715 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.715 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.715 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.715 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.715 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:36.715 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:36.715 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:36.715 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:36.715 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:36.715 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:36.715 Initialization complete. Launching workers. 
00:08:36.715 ======================================================== 00:08:36.715 Latency(us) 00:08:36.715 Device Information : IOPS MiB/s Average min max 00:08:36.715 PCIE (0000:00:11.0) NSID 1 from core 1: 7983.20 31.18 2003.78 723.72 5911.26 00:08:36.715 PCIE (0000:00:13.0) NSID 1 from core 1: 7983.20 31.18 2003.94 714.92 6510.47 00:08:36.715 PCIE (0000:00:10.0) NSID 1 from core 1: 7983.20 31.18 2003.04 692.86 6095.92 00:08:36.715 PCIE (0000:00:12.0) NSID 1 from core 1: 7983.20 31.18 2004.03 698.71 6009.65 00:08:36.715 PCIE (0000:00:12.0) NSID 2 from core 1: 7983.20 31.18 2003.99 715.45 6152.81 00:08:36.715 PCIE (0000:00:12.0) NSID 3 from core 1: 7983.20 31.18 2003.97 712.63 5969.07 00:08:36.715 ======================================================== 00:08:36.715 Total : 47899.20 187.11 2003.79 692.86 6510.47 00:08:36.715 00:08:36.715 Initializing NVMe Controllers 00:08:36.715 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.715 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.715 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.715 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.715 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:36.715 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:36.715 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:36.715 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:36.715 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:36.715 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:36.715 Initialization complete. Launching workers. 00:08:36.715 ======================================================== 00:08:36.715 Latency(us) 00:08:36.715 Device Information : IOPS MiB/s Average min max 00:08:36.715 PCIE (0000:00:11.0) NSID 1 from core 0: 7935.11 31.00 2015.93 715.97 6471.71 00:08:36.715 PCIE (0000:00:13.0) NSID 1 from core 0: 7935.11 31.00 2015.95 726.86 5829.80 00:08:36.715 PCIE (0000:00:10.0) NSID 1 from core 0: 7935.11 31.00 2014.97 699.69 5746.79 00:08:36.715 PCIE (0000:00:12.0) NSID 1 from core 0: 7935.11 31.00 2015.81 711.64 6260.03 00:08:36.715 PCIE (0000:00:12.0) NSID 2 from core 0: 7935.11 31.00 2015.76 732.45 6359.47 00:08:36.715 PCIE (0000:00:12.0) NSID 3 from core 0: 7935.11 31.00 2015.71 726.37 6610.27 00:08:36.715 ======================================================== 00:08:36.715 Total : 47610.65 185.98 2015.69 699.69 6610.27 00:08:36.715 00:08:38.617 Initializing NVMe Controllers 00:08:38.617 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:38.617 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:38.617 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:38.617 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:38.617 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:38.617 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:38.617 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:38.617 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:38.617 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:38.617 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:38.617 Initialization complete. Launching workers. 
00:08:38.617 ======================================================== 00:08:38.617 Latency(us) 00:08:38.617 Device Information : IOPS MiB/s Average min max 00:08:38.617 PCIE (0000:00:11.0) NSID 1 from core 2: 4827.23 18.86 3313.85 748.71 12591.54 00:08:38.617 PCIE (0000:00:13.0) NSID 1 from core 2: 4827.23 18.86 3313.97 746.44 12555.43 00:08:38.617 PCIE (0000:00:10.0) NSID 1 from core 2: 4827.23 18.86 3312.90 736.34 12652.42 00:08:38.617 PCIE (0000:00:12.0) NSID 1 from core 2: 4827.23 18.86 3313.56 701.64 12582.38 00:08:38.617 PCIE (0000:00:12.0) NSID 2 from core 2: 4827.23 18.86 3313.76 740.10 12816.84 00:08:38.617 PCIE (0000:00:12.0) NSID 3 from core 2: 4827.23 18.86 3313.23 745.68 12754.49 00:08:38.617 ======================================================== 00:08:38.617 Total : 28963.39 113.14 3313.54 701.64 12816.84 00:08:38.617 00:08:38.617 ************************************ 00:08:38.617 END TEST nvme_multi_secondary 00:08:38.617 ************************************ 00:08:38.617 14:45:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63723 00:08:38.617 14:45:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63724 00:08:38.617 00:08:38.617 real 0m10.815s 00:08:38.617 user 0m18.409s 00:08:38.617 sys 0m0.646s 00:08:38.617 14:45:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.617 14:45:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:38.617 14:45:23 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:38.617 14:45:23 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:38.617 14:45:23 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62679 ]] 00:08:38.617 14:45:23 nvme -- common/autotest_common.sh@1094 -- # kill 62679 00:08:38.617 14:45:23 nvme -- common/autotest_common.sh@1095 -- # wait 62679 00:08:38.617 [2024-11-17 14:45:23.805028] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.805119] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.805155] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.805179] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.807681] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.807722] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.807735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.807748] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.809442] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 
00:08:38.617 [2024-11-17 14:45:23.809480] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.809492] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.809504] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.811179] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.811217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.811228] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 [2024-11-17 14:45:23.811241] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63596) is not found. Dropping the request. 00:08:38.617 14:45:23 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:38.617 14:45:23 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:38.617 14:45:23 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:38.617 14:45:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.617 14:45:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.617 14:45:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.617 ************************************ 00:08:38.617 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:38.617 ************************************ 00:08:38.617 14:45:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:38.617 * Looking for test storage... 
00:08:38.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.618 --rc genhtml_branch_coverage=1 00:08:38.618 --rc genhtml_function_coverage=1 00:08:38.618 --rc genhtml_legend=1 00:08:38.618 --rc geninfo_all_blocks=1 00:08:38.618 --rc geninfo_unexecuted_blocks=1 00:08:38.618 00:08:38.618 ' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.618 --rc genhtml_branch_coverage=1 00:08:38.618 --rc genhtml_function_coverage=1 00:08:38.618 --rc genhtml_legend=1 00:08:38.618 --rc geninfo_all_blocks=1 00:08:38.618 --rc geninfo_unexecuted_blocks=1 00:08:38.618 00:08:38.618 ' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.618 --rc genhtml_branch_coverage=1 00:08:38.618 --rc genhtml_function_coverage=1 00:08:38.618 --rc genhtml_legend=1 00:08:38.618 --rc geninfo_all_blocks=1 00:08:38.618 --rc geninfo_unexecuted_blocks=1 00:08:38.618 00:08:38.618 ' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.618 --rc genhtml_branch_coverage=1 00:08:38.618 --rc genhtml_function_coverage=1 00:08:38.618 --rc genhtml_legend=1 00:08:38.618 --rc geninfo_all_blocks=1 00:08:38.618 --rc geninfo_unexecuted_blocks=1 00:08:38.618 00:08:38.618 ' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:38.618 
14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:38.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63888 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63888 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63888 ']' 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
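The get_first_nvme_bdf call traced above resolves the controller list by asking scripts/gen_nvme.sh for a JSON config and pulling the traddr fields out with jq, then taking the first PCI address (0000:00:10.0 on this VM). A minimal sketch of that pattern, with the paths copied from the trace; the real helpers live in common/autotest_common.sh and do extra validation:

    #!/usr/bin/env bash
    # Enumerate NVMe PCI addresses (BDFs) from the generated SPDK config,
    # mirroring the gen_nvme.sh | jq pipeline shown in the trace above.
    rootdir=/home/vagrant/spdk_repo/spdk    # taken from the trace

    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && { echo "No NVMe devices found" >&2; return 1; }
        printf '%s\n' "${bdfs[@]}"
    }

    # First controller only, e.g. 0000:00:10.0
    bdf=$(get_nvme_bdfs | head -n1)
    [[ -n $bdf ]] || exit 1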
00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.618 14:45:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.877 [2024-11-17 14:45:24.206595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:38.877 [2024-11-17 14:45:24.206815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63888 ] 00:08:38.877 [2024-11-17 14:45:24.370397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.135 [2024-11-17 14:45:24.548205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.135 [2024-11-17 14:45:24.548568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.135 [2024-11-17 14:45:24.548589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.135 [2024-11-17 14:45:24.548398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.702 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.702 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:39.702 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:39.703 nvme0n1 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_AoCNG.txt 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:39.703 true 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731854725 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63911 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:39.703 14:45:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.232 [2024-11-17 14:45:27.238706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:42.232 [2024-11-17 14:45:27.238948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:42.232 [2024-11-17 14:45:27.238969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:42.232 [2024-11-17 14:45:27.238980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.232 [2024-11-17 14:45:27.240596] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:42.232 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63911 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63911 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63911 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_AoCNG.txt 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_AoCNG.txt 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63888 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63888 ']' 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63888 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63888 00:08:42.232 killing process with pid 63888 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63888' 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63888 00:08:42.232 14:45:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63888 00:08:43.173 14:45:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:43.174 14:45:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:43.174 00:08:43.174 real 0m4.535s 00:08:43.174 user 0m15.991s 00:08:43.174 sys 0m0.445s 00:08:43.174 14:45:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:43.174 14:45:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.174 ************************************ 00:08:43.174 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:43.174 ************************************ 00:08:43.174 14:45:28 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:43.174 14:45:28 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:43.174 14:45:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.174 14:45:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.174 14:45:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.174 ************************************ 00:08:43.174 START TEST nvme_fio 00:08:43.174 ************************************ 00:08:43.174 14:45:28 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:43.174 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:43.174 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:43.174 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:43.174 14:45:28 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:43.174 14:45:28 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:43.174 14:45:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:43.174 14:45:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:43.174 14:45:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:43.174 14:45:28 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:43.174 14:45:28 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:43.174 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:43.174 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:43.174 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:43.174 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:43.174 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.435 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.435 14:45:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:43.696 14:45:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:43.696 14:45:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:43.696 14:45:29 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:43.696 14:45:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.956 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:43.956 fio-3.35 00:08:43.956 Starting 1 thread 00:08:49.236 00:08:49.236 test: (groupid=0, jobs=1): err= 0: pid=64048: Sun Nov 17 14:45:34 2024 00:08:49.236 read: IOPS=20.3k, BW=79.3MiB/s (83.2MB/s)(159MiB/2001msec) 00:08:49.236 slat (nsec): min=4075, max=80596, avg=5319.36, stdev=2597.70 00:08:49.236 clat (usec): min=192, max=10661, avg=3133.22, stdev=1178.36 00:08:49.236 lat (usec): min=197, max=10711, avg=3138.54, stdev=1179.63 00:08:49.236 clat percentiles (usec): 00:08:49.236 | 1.00th=[ 1729], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:08:49.236 | 30.00th=[ 2409], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2868], 00:08:49.236 | 70.00th=[ 3163], 80.00th=[ 3752], 90.00th=[ 4948], 95.00th=[ 5866], 00:08:49.236 | 99.00th=[ 7111], 99.50th=[ 7439], 99.90th=[ 8356], 99.95th=[ 8979], 00:08:49.236 | 99.99th=[ 9896] 00:08:49.236 bw ( KiB/s): min=68392, max=81832, per=94.78%, avg=76968.00, stdev=7449.34, samples=3 00:08:49.236 iops : min=17098, max=20458, avg=19242.00, stdev=1862.33, samples=3 00:08:49.237 write: IOPS=20.3k, BW=79.1MiB/s (83.0MB/s)(158MiB/2001msec); 0 zone resets 00:08:49.237 slat (nsec): min=4215, max=80300, avg=5475.57, stdev=2642.73 00:08:49.237 clat (usec): min=208, max=9922, avg=3153.27, stdev=1183.34 00:08:49.237 lat (usec): min=212, max=9937, avg=3158.75, stdev=1184.59 00:08:49.237 clat percentiles (usec): 00:08:49.237 | 1.00th=[ 1729], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:08:49.237 | 30.00th=[ 2442], 40.00th=[ 2573], 50.00th=[ 2704], 60.00th=[ 2868], 00:08:49.237 | 70.00th=[ 3195], 80.00th=[ 3752], 90.00th=[ 5014], 95.00th=[ 5932], 00:08:49.237 | 99.00th=[ 7177], 99.50th=[ 7504], 99.90th=[ 8586], 99.95th=[ 9241], 00:08:49.237 | 99.99th=[ 9765] 00:08:49.237 bw ( KiB/s): min=68576, max=81992, per=95.08%, avg=77045.33, stdev=7369.13, samples=3 00:08:49.237 iops : min=17144, max=20498, avg=19261.33, stdev=1842.28, samples=3 00:08:49.237 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:08:49.237 lat (msec) : 2=2.42%, 4=80.59%, 10=16.90%, 20=0.01% 00:08:49.237 cpu : usr=98.95%, sys=0.05%, 
ctx=3, majf=0, minf=608 00:08:49.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:49.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:49.237 issued rwts: total=40622,40537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:49.237 00:08:49.237 Run status group 0 (all jobs): 00:08:49.237 READ: bw=79.3MiB/s (83.2MB/s), 79.3MiB/s-79.3MiB/s (83.2MB/s-83.2MB/s), io=159MiB (166MB), run=2001-2001msec 00:08:49.237 WRITE: bw=79.1MiB/s (83.0MB/s), 79.1MiB/s-79.1MiB/s (83.0MB/s-83.0MB/s), io=158MiB (166MB), run=2001-2001msec 00:08:49.237 ----------------------------------------------------- 00:08:49.237 Suppressions used: 00:08:49.237 count bytes template 00:08:49.237 1 32 /usr/src/fio/parse.c 00:08:49.237 1 8 libtcmalloc_minimal.so 00:08:49.237 ----------------------------------------------------- 00:08:49.237 00:08:49.237 14:45:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:49.237 14:45:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:49.237 14:45:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:49.237 14:45:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:49.237 14:45:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:49.237 14:45:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:49.498 14:45:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:49.498 14:45:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:49.498 14:45:34 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:49.498 14:45:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:49.498 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:49.498 fio-3.35 00:08:49.498 Starting 1 thread 00:08:56.081 00:08:56.081 test: (groupid=0, jobs=1): err= 0: pid=64112: Sun Nov 17 14:45:41 2024 00:08:56.081 read: IOPS=19.9k, BW=77.9MiB/s (81.7MB/s)(156MiB/2001msec) 00:08:56.081 slat (nsec): min=3374, max=71949, avg=5298.73, stdev=2473.60 00:08:56.081 clat (usec): min=380, max=11521, avg=3187.67, stdev=1119.89 00:08:56.081 lat (usec): min=385, max=11584, avg=3192.97, stdev=1121.04 00:08:56.081 clat percentiles (usec): 00:08:56.081 | 1.00th=[ 2008], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2474], 00:08:56.081 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2933], 00:08:56.081 | 70.00th=[ 3163], 80.00th=[ 3654], 90.00th=[ 5014], 95.00th=[ 5866], 00:08:56.081 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 8717], 99.95th=[ 9110], 00:08:56.081 | 99.99th=[11469] 00:08:56.081 bw ( KiB/s): min=75488, max=78922, per=97.35%, avg=77675.33, stdev=1900.46, samples=3 00:08:56.081 iops : min=18872, max=19730, avg=19418.67, stdev=474.95, samples=3 00:08:56.081 write: IOPS=19.9k, BW=77.7MiB/s (81.5MB/s)(156MiB/2001msec); 0 zone resets 00:08:56.081 slat (nsec): min=3487, max=82033, avg=5401.96, stdev=2483.10 00:08:56.081 clat (usec): min=355, max=11445, avg=3211.46, stdev=1113.80 00:08:56.081 lat (usec): min=360, max=11463, avg=3216.86, stdev=1114.93 00:08:56.081 clat percentiles (usec): 00:08:56.081 | 1.00th=[ 2024], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:08:56.081 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2933], 00:08:56.081 | 70.00th=[ 3195], 80.00th=[ 3720], 90.00th=[ 5014], 95.00th=[ 5866], 00:08:56.081 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 8717], 99.95th=[ 9241], 00:08:56.081 | 99.99th=[10683] 00:08:56.081 bw ( KiB/s): min=75672, max=79160, per=97.82%, avg=77864.67, stdev=1909.30, samples=3 00:08:56.081 iops : min=18918, max=19790, avg=19466.00, stdev=477.21, samples=3 00:08:56.081 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:08:56.081 lat (msec) : 2=0.85%, 4=82.14%, 10=16.92%, 20=0.03% 00:08:56.081 cpu : usr=99.15%, sys=0.05%, ctx=2, majf=0, minf=608 00:08:56.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.081 issued rwts: total=39915,39820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.081 00:08:56.081 Run status group 0 (all jobs): 00:08:56.081 READ: bw=77.9MiB/s (81.7MB/s), 77.9MiB/s-77.9MiB/s (81.7MB/s-81.7MB/s), io=156MiB (163MB), run=2001-2001msec 00:08:56.081 WRITE: bw=77.7MiB/s (81.5MB/s), 77.7MiB/s-77.7MiB/s (81.5MB/s-81.5MB/s), io=156MiB (163MB), run=2001-2001msec 00:08:56.081 ----------------------------------------------------- 00:08:56.081 Suppressions used: 00:08:56.081 count bytes template 00:08:56.081 1 32 /usr/src/fio/parse.c 00:08:56.081 1 8 libtcmalloc_minimal.so 00:08:56.081 ----------------------------------------------------- 00:08:56.081 
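Each per-controller pass above follows the same shape: spdk_nvme_identify confirms a namespace and checks for Extended Data LBA, the block size is fixed at 4096, and fio is launched with the SPDK NVMe external ioengine preloaded together with the ASAN runtime the plugin was linked against. A rough sketch of that invocation, with paths copied from the trace; the real fio_plugin helper in autotest_common.sh also iterates over other sanitizer runtimes:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # The plugin is built with ASAN, so the matching runtime must be preloaded
    # before fio itself loads the ioengine.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # SPDK's fio ioengine addresses local controllers as "trtype=PCIe traddr=...",
    # with dots instead of colons in the PCI address.
    LD_PRELOAD="$asan_lib $plugin" \
        /usr/src/fio/fio "$config" '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096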
00:08:56.081 14:45:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:56.081 14:45:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:56.081 14:45:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:56.081 14:45:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:56.343 14:45:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:56.343 14:45:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:56.604 14:45:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:56.604 14:45:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:56.604 14:45:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:56.866 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:56.866 fio-3.35 00:08:56.866 Starting 1 thread 00:09:02.202 00:09:02.202 test: (groupid=0, jobs=1): err= 0: pid=64172: Sun Nov 17 14:45:47 2024 00:09:02.202 read: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec) 00:09:02.202 slat (usec): min=4, max=665, avg= 6.57, stdev= 5.99 00:09:02.202 clat (usec): min=767, max=12166, avg=4030.69, stdev=1549.59 00:09:02.202 lat (usec): min=774, max=12207, avg=4037.26, stdev=1551.12 00:09:02.202 clat percentiles (usec): 00:09:02.202 | 1.00th=[ 2311], 5.00th=[ 2638], 10.00th=[ 2737], 20.00th=[ 2900], 00:09:02.202 | 30.00th=[ 
3032], 40.00th=[ 3195], 50.00th=[ 3359], 60.00th=[ 3621], 00:09:02.202 | 70.00th=[ 4293], 80.00th=[ 5342], 90.00th=[ 6390], 95.00th=[ 7242], 00:09:02.202 | 99.00th=[ 8979], 99.50th=[ 9372], 99.90th=[10683], 99.95th=[11076], 00:09:02.202 | 99.99th=[12125] 00:09:02.202 bw ( KiB/s): min=59976, max=64592, per=98.92%, avg=62493.33, stdev=2336.31, samples=3 00:09:02.202 iops : min=14994, max=16148, avg=15623.33, stdev=584.08, samples=3 00:09:02.202 write: IOPS=15.8k, BW=61.8MiB/s (64.8MB/s)(124MiB/2001msec); 0 zone resets 00:09:02.202 slat (usec): min=4, max=176, avg= 6.71, stdev= 4.05 00:09:02.202 clat (usec): min=448, max=13328, avg=4040.83, stdev=1531.78 00:09:02.202 lat (usec): min=473, max=13398, avg=4047.54, stdev=1533.33 00:09:02.202 clat percentiles (usec): 00:09:02.202 | 1.00th=[ 2311], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2933], 00:09:02.202 | 30.00th=[ 3064], 40.00th=[ 3195], 50.00th=[ 3359], 60.00th=[ 3621], 00:09:02.202 | 70.00th=[ 4293], 80.00th=[ 5342], 90.00th=[ 6390], 95.00th=[ 7242], 00:09:02.202 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[10552], 99.95th=[11338], 00:09:02.202 | 99.99th=[13042] 00:09:02.202 bw ( KiB/s): min=58888, max=65016, per=98.08%, avg=62032.00, stdev=3067.13, samples=3 00:09:02.202 iops : min=14722, max=16254, avg=15508.00, stdev=766.78, samples=3 00:09:02.202 lat (usec) : 500=0.01%, 1000=0.02% 00:09:02.202 lat (msec) : 2=0.31%, 4=66.77%, 10=32.70%, 20=0.20% 00:09:02.202 cpu : usr=97.60%, sys=0.65%, ctx=8, majf=0, minf=607 00:09:02.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:02.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.202 issued rwts: total=31602,31640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.202 00:09:02.202 Run status group 0 (all jobs): 00:09:02.202 READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:09:02.202 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=124MiB (130MB), run=2001-2001msec 00:09:02.480 ----------------------------------------------------- 00:09:02.480 Suppressions used: 00:09:02.480 count bytes template 00:09:02.480 1 32 /usr/src/fio/parse.c 00:09:02.480 1 8 libtcmalloc_minimal.so 00:09:02.480 ----------------------------------------------------- 00:09:02.480 00:09:02.480 14:45:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:02.480 14:45:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:02.480 14:45:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:02.480 14:45:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:02.741 14:45:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:02.742 14:45:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:03.003 14:45:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:03.004 14:45:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:03.004 14:45:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:03.265 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.265 fio-3.35 00:09:03.265 Starting 1 thread 00:09:11.409 00:09:11.409 test: (groupid=0, jobs=1): err= 0: pid=64233: Sun Nov 17 14:45:55 2024 00:09:11.409 read: IOPS=14.7k, BW=57.2MiB/s (60.0MB/s)(115MiB/2001msec) 00:09:11.409 slat (nsec): min=4250, max=90372, avg=6952.81, stdev=4148.37 00:09:11.409 clat (usec): min=254, max=12532, avg=4334.12, stdev=1510.34 00:09:11.409 lat (usec): min=259, max=12584, avg=4341.08, stdev=1511.78 00:09:11.409 clat percentiles (usec): 00:09:11.409 | 1.00th=[ 2245], 5.00th=[ 2606], 10.00th=[ 2802], 20.00th=[ 2999], 00:09:11.409 | 30.00th=[ 3195], 40.00th=[ 3425], 50.00th=[ 3818], 60.00th=[ 4490], 00:09:11.409 | 70.00th=[ 5145], 80.00th=[ 5800], 90.00th=[ 6521], 95.00th=[ 7046], 00:09:11.409 | 99.00th=[ 8160], 99.50th=[ 8717], 99.90th=[ 9765], 99.95th=[10683], 00:09:11.409 | 99.99th=[12518] 00:09:11.409 bw ( KiB/s): min=57024, max=59512, per=99.75%, avg=58477.33, stdev=1295.76, samples=3 00:09:11.409 iops : min=14256, max=14878, avg=14619.33, stdev=323.94, samples=3 00:09:11.409 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(115MiB/2001msec); 0 zone resets 00:09:11.409 slat (nsec): min=4315, max=91375, avg=7081.74, stdev=4254.53 00:09:11.409 clat (usec): min=292, max=12459, avg=4361.12, stdev=1495.92 00:09:11.409 lat (usec): min=297, max=12479, avg=4368.21, stdev=1497.31 00:09:11.409 clat percentiles (usec): 00:09:11.409 | 1.00th=[ 2278], 5.00th=[ 2638], 10.00th=[ 2802], 20.00th=[ 3032], 00:09:11.409 | 30.00th=[ 3228], 40.00th=[ 3458], 50.00th=[ 3851], 60.00th=[ 4555], 00:09:11.409 | 70.00th=[ 5145], 80.00th=[ 5800], 90.00th=[ 6521], 95.00th=[ 7046], 
00:09:11.409 | 99.00th=[ 8160], 99.50th=[ 8717], 99.90th=[10159], 99.95th=[11600], 00:09:11.409 | 99.99th=[12387] 00:09:11.409 bw ( KiB/s): min=57056, max=59256, per=99.38%, avg=58346.67, stdev=1148.50, samples=3 00:09:11.409 iops : min=14264, max=14814, avg=14586.67, stdev=287.13, samples=3 00:09:11.409 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:11.409 lat (msec) : 2=0.23%, 4=52.13%, 10=47.51%, 20=0.10% 00:09:11.409 cpu : usr=98.50%, sys=0.00%, ctx=3, majf=0, minf=605 00:09:11.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:11.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.409 issued rwts: total=29326,29371,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.409 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.409 00:09:11.409 Run status group 0 (all jobs): 00:09:11.409 READ: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=115MiB (120MB), run=2001-2001msec 00:09:11.409 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=115MiB (120MB), run=2001-2001msec 00:09:11.409 ----------------------------------------------------- 00:09:11.409 Suppressions used: 00:09:11.409 count bytes template 00:09:11.409 1 32 /usr/src/fio/parse.c 00:09:11.409 1 8 libtcmalloc_minimal.so 00:09:11.409 ----------------------------------------------------- 00:09:11.409 00:09:11.409 14:45:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:11.409 14:45:56 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:11.409 00:09:11.409 real 0m27.567s 00:09:11.409 user 0m18.567s 00:09:11.409 sys 0m15.127s 00:09:11.409 14:45:56 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.409 14:45:56 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:11.409 ************************************ 00:09:11.409 END TEST nvme_fio 00:09:11.409 ************************************ 00:09:11.409 00:09:11.409 real 1m36.713s 00:09:11.409 user 3m38.709s 00:09:11.409 sys 0m25.594s 00:09:11.409 14:45:56 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.410 ************************************ 00:09:11.410 14:45:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.410 END TEST nvme 00:09:11.410 ************************************ 00:09:11.410 14:45:56 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:11.410 14:45:56 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:11.410 14:45:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.410 14:45:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.410 14:45:56 -- common/autotest_common.sh@10 -- # set +x 00:09:11.410 ************************************ 00:09:11.410 START TEST nvme_scc 00:09:11.410 ************************************ 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:11.410 * Looking for test storage... 
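run_test is the wrapper that produces the START TEST / END TEST banners and the real/user/sys timings seen above for nvme_fio and, next, nvme_scc. A simplified sketch consistent with that output; the actual implementation in common/autotest_common.sh also manages xtrace prefixes and exit-code bookkeeping:

    banner() {
        printf '%s\n' "************************************"
        printf '%s\n' "$*"
        printf '%s\n' "************************************"
    }

    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1      # need at least a command to run
        banner "START TEST $name"
        time "$@"                      # emits the real/user/sys lines in the log
        local rc=$?
        banner "END TEST $name"
        return $rc
    }

    # Usage mirroring the trace:
    # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh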
00:09:11.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.410 --rc genhtml_branch_coverage=1 00:09:11.410 --rc genhtml_function_coverage=1 00:09:11.410 --rc genhtml_legend=1 00:09:11.410 --rc geninfo_all_blocks=1 00:09:11.410 --rc geninfo_unexecuted_blocks=1 00:09:11.410 00:09:11.410 ' 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.410 --rc genhtml_branch_coverage=1 00:09:11.410 --rc genhtml_function_coverage=1 00:09:11.410 --rc genhtml_legend=1 00:09:11.410 --rc geninfo_all_blocks=1 00:09:11.410 --rc geninfo_unexecuted_blocks=1 00:09:11.410 00:09:11.410 ' 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:11.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.410 --rc genhtml_branch_coverage=1 00:09:11.410 --rc genhtml_function_coverage=1 00:09:11.410 --rc genhtml_legend=1 00:09:11.410 --rc geninfo_all_blocks=1 00:09:11.410 --rc geninfo_unexecuted_blocks=1 00:09:11.410 00:09:11.410 ' 00:09:11.410 14:45:56 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.410 --rc genhtml_branch_coverage=1 00:09:11.410 --rc genhtml_function_coverage=1 00:09:11.410 --rc genhtml_legend=1 00:09:11.410 --rc geninfo_all_blocks=1 00:09:11.410 --rc geninfo_unexecuted_blocks=1 00:09:11.410 00:09:11.410 ' 00:09:11.410 14:45:56 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.410 14:45:56 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.410 14:45:56 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.410 14:45:56 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.410 14:45:56 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.410 14:45:56 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:11.410 14:45:56 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
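The coverage-flag gate above (run once for the nvme suite and again here for nvme_scc) enables the lcov branch/function options only when `lt <lcov-version> 2` holds, a dotted-version comparison implemented in scripts/common.sh. A condensed sketch of that comparison, assuming purely numeric dot/dash-separated components as in the trace (the real cmp_versions takes an explicit operator argument):

    # Compare two dotted versions; echoes "lt", "gt" or "eq".
    cmp_versions_sketch() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v a b
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { echo gt; return; }
            (( a < b )) && { echo lt; return; }
        done
        echo eq
    }
    # cmp_versions_sketch 1.15 2  -> lt, so the branch-coverage flags get enabled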
00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:11.410 14:45:56 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:11.410 14:45:56 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.410 14:45:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:11.410 14:45:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:11.410 14:45:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:11.410 14:45:56 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:11.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:11.410 Waiting for block devices as requested 00:09:11.410 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:11.672 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:11.672 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:11.672 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:16.962 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:16.962 14:46:02 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:16.962 14:46:02 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:16.962 14:46:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:16.962 14:46:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:16.962 14:46:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:16.962 14:46:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:16.962 14:46:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:16.962 14:46:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:16.962 14:46:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.962 14:46:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
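The long run of nvme/functions.sh@23 assignments that starts here (and continues below) is nvme_get parsing `nvme id-ctrl /dev/nvme0` field by field into a bash associative array, so later checks can look up nvme0[vid], nvme0[mdts], nvme0[oacs] and so on by name. A stripped-down sketch of that parsing loop, assuming the usual "field : value" layout of nvme-cli output; the real nvme_get in test/common/nvme/functions.sh handles more edge cases:

    declare -A nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # e.g. "vid       " -> "vid"
        val=${val# }                 # drop the leading space after the colon
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    # Lookups like these drive the rest of the controller checks:
    # echo "${nvme0[vid]} ${nvme0[mdts]} ${nvme0[oacs]}"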
00:09:16.963 14:46:02 nvme_scc -- nvme/functions.sh -- id-ctrl decode for controller nvme0: each "reg : val" line is read with IFS=: and stored as nvme0[reg]=val in the nvme0 associative array. Values captured:
    ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
    oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
    pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
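The block above is the shell trace of nvme/functions.sh scraping nvme-cli "id-ctrl" output into a bash associative array, one "reg : val" line at a time. A minimal sketch of that pattern, with illustrative variable names rather than the exact SPDK helper, is:

    #!/usr/bin/env bash
    # Sketch of the id-ctrl scrape traced above; names here are illustrative.
    declare -A ctrl=()

    # nvme-cli prints one "field : value" line per identify field.
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                    # skip lines without a value
        reg=${reg//[[:space:]]/}                     # field name, whitespace stripped
        val=${val#"${val%%[![:space:]]*}"}           # trim leading whitespace only
        ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    # ONCS bit 8 advertises the Copy command; with oncs=0x15d, as decoded
    # above, the bit is set.
    if (( ${ctrl[oncs]:-0} & 0x100 )); then
        echo "Copy command supported (oncs=${ctrl[oncs]})"
    fi

Since nvme_scc exercises the NVMe Simple Copy command, a gate of this kind on the decoded oncs value is the sort of check the test needs before issuing Copy commands to the controller.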
14:46:02 nvme_scc -- nvme/functions.sh -- remaining nvme0 id-ctrl fields: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
14:46:02 nvme_scc -- nvme/functions.sh -- namespace scan for nvme0: /sys/class/nvme/nvme0/nvme0n1 exists, so nvme_get nvme0n1 id-ns /dev/nvme0n1 fills the nvme0n1 associative array:
    nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
14:46:02 nvme_scc -- nvme/functions.sh -- nvme0 registered: _ctrl_ns[1]=nvme0n1, ctrls[nvme0]=nvme0, nvmes[nvme0]=nvme0_ns, bdfs[nvme0]=0000:00:11.0, ordered_ctrls[0]=nvme0. Next controller found at /sys/class/nvme/nvme1, PCI 0000:00:10.0 (pci_can_use returns 0), so nvme_get nvme1 id-ctrl /dev/nvme1 starts filling the nvme1 array.
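At this point the helper holds the full state for the first controller: the nvme0 and nvme0n1 associative arrays plus the ctrls/nvmes/bdfs/ordered_ctrls maps recorded above. A short illustrative sketch of how that state can be consumed follows; the array contents are copied from the trace, but the script itself is not part of the test suite:

    #!/usr/bin/env bash
    # Illustrative consumer of the arrays populated in the trace above.
    declare -A nvme0n1=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )
    declare -A bdfs=([nvme0]=0000:00:11.0)
    ordered_ctrls=(nvme0)

    # FLBAS bits 3:0 select the in-use LBA format; lbads is log2(block size).
    fmt=$(( nvme0n1[flbas] & 0xf ))
    lbaf=${nvme0n1[lbaf$fmt]}
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf")
    echo "nvme0n1 uses LBA format $fmt: $((1 << lbads))-byte logical blocks"

    for ctrl in "${ordered_ctrls[@]}"; do
        echo "$ctrl sits at PCI ${bdfs[$ctrl]}"
    done

With the values above this prints LBA format 4 with 4096-byte logical blocks (lbads:12), matching the "(in use)" marker in the id-ns decode, and maps nvme0 back to PCI 0000:00:11.0.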
14:46:02 nvme_scc -- nvme/functions.sh -- /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1: the second controller is decoded into the nvme1 associative array. Fields captured so far (same QEMU controller profile as nvme0, serial 12340):
    vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
    crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
    mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
nvme/functions.sh@21 -- # IFS=: 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:16.969 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:16.970 14:46:02 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
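Note (annotation, not captured console output): the xtrace entries above show nvme/functions.sh's nvme_get filling the global associative array nvme1 with the fields reported by /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1. A minimal sketch of that loop, reconstructed from the functions.sh@16-@23 trace lines rather than copied from the script, with the key-trimming detail assumed:

    nvme_get() {
        local ref=$1 reg val               # e.g. ref=nvme1; after shift, "$@" is: id-ctrl /dev/nvme1
        shift
        local -gA "$ref=()"                # declare a global assoc array named after the device
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # skip lines that are not a "field : value" pair
            reg=${reg//[[:space:]]/}       # assumption: field name is trimmed before use as a key
            eval "${ref}[$reg]=\"${val# }\""   # e.g. nvme1[frmw]="0x3"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }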
00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:16.970 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.971 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.972 
14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:16.972 14:46:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:16.972 14:46:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:16.972 14:46:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.972 14:46:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:16.972 14:46:02 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.972 14:46:02 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.972 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:16.973 14:46:02 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:16.973 14:46:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
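Note (annotation): between the nvme1n1 id-ns dump and the nvme2 id-ctrl dump above, the functions.sh@47-@52 entries show the outer discovery loop: each controller under /sys/class/nvme is considered, its PCI address (0000:00:12.0 for nvme2) is checked with scripts/common.sh's pci_can_use allow/block-list test, and only then is nvme_get run against it. A sketch of that loop under the same caveat (reconstructed from the trace, not verbatim; the PCI-address derivation is an assumption):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed derivation)
        pci_can_use "$pci" || continue                    # honors the job's PCI allow/block lists
        ctrl_dev=${ctrl##*/}                              # nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # produces the dump that follows
    done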
00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.973 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:16.974 14:46:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
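Note (annotation): once a controller's id-ctrl fields are stored, the functions.sh@53-@63 entries earlier in this trace (after the nvme1 dump) enumerate its namespaces and register everything in the global maps; in the nvme1n1 id-ns dump, flbas=0x7 selects lbaf7 ("ms:64 lbads:12 rp:0 (in use)"), i.e. 4096-byte data blocks with 64 bytes of metadata per block. A sketch of that bookkeeping, reconstructed from the trace:

    local -n _ctrl_ns=${ctrl_dev}_ns         # nameref to nvme1_ns, nvme2_ns, ...
    for ns in "$ctrl/${ctrl##*/}n"*; do      # /sys/class/nvme/nvme1/nvme1n1, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev          # index namespaces by number
    done
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns
    bdfs["$ctrl_dev"]=$pci                   # 0000:00:10.0 for nvme1 in this run
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev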
00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
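[annotation] The trace above is the nvme_get helper walking `nvme id-ctrl` output one "field : value" line at a time (IFS=: / read / eval) and storing each field in a bash associative array keyed by register name. A minimal standalone sketch of that pattern, assuming nvme-cli is installed and using an illustrative array name (ctrl_info) and device path; this is not the project's functions.sh, only an illustration of the technique visible in the log:

#!/usr/bin/env bash
# Sketch: parse "field : value" lines from `nvme id-ctrl` into an associative
# array, mirroring the IFS=: / read pattern seen in the trace above.
# ctrl_info and /dev/nvme2 are illustrative; adjust to the device under test.
declare -A ctrl_info=()

while IFS=: read -r reg val; do
    [[ -n $val ]] || continue              # keep only "field : value" lines
    reg=${reg//[[:space:]]/}               # field name, e.g. oncs, sqes, subnqn
    read -r val <<< "$val"                 # trim whitespace around the value
    ctrl_info[$reg]=$val                   # e.g. ctrl_info[oncs]=0x15d, as in the log
done < <(nvme id-ctrl /dev/nvme2)

echo "subnqn: ${ctrl_info[subnqn]:-unknown}"
echo "oncs:   ${ctrl_info[oncs]:-0}"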
00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:16.974 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:16.975 
14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.975 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
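[annotation] At this point the trace has moved from the controller to its namespaces: it globs the nvme2n* nodes under /sys/class/nvme/nvme2, runs `nvme id-ns` for each, and fills one associative array per namespace (nvme2n1, nvme2n2, ...). A hedged sketch of that walk, with an illustrative helper name (collect_ns_fields); the nameref usage mirrors the `local -n` seen in the trace, but the code is a reconstruction, not the original script:

#!/usr/bin/env bash
# Sketch of the per-namespace walk visible in the trace: for each namespace
# node of a controller, parse `nvme id-ns` into its own associative array.
collect_ns_fields() {
    local dev=$1 reg val
    local -n out=$2                        # nameref, like the trace's local -n _ctrl_ns
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip lines that are not "field : value"
        reg=${reg//[[:space:]]/}           # "lbaf  4" -> "lbaf4", matching the keys in the log
        read -r val <<< "$val"             # trim surrounding whitespace, keep inner spaces
        out[$reg]=$val
    done < <(nvme id-ns "$dev")
}

ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/${ctrl##*/}n"*; do
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                       # e.g. nvme2n1
    declare -gA "$ns_dev"                  # one associative array per namespace
    collect_ns_fields "/dev/$ns_dev" "$ns_dev"
    field="${ns_dev}[nsze]"
    echo "$ns_dev nsze=${!field}"          # e.g. 0x100000, as recorded above
done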
00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:16.976 14:46:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:16.976 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:16.977 14:46:02 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:16.977 14:46:02 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:16.977 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
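[annotation] The lbaf0..lbaf7 strings captured above ("ms:0 lbads:12 rp:0 (in use)", etc.) describe the namespace's LBA formats; lbads is the base-2 log of the LBA data size, so the in-use format recorded here (lbads:12) corresponds to 4096-byte blocks, and bits 0-3 of flbas select which format is current. A small illustrative sketch that decodes this from fields parsed as in the trace; the ns array below is a hand-filled example, not taken from the live run:

#!/usr/bin/env bash
# Sketch: pick the in-use LBA format out of parsed id-ns fields and compute the
# block size. Values mirror the ones recorded in the trace; purely illustrative.
declare -A ns=(
    [flbas]=0x4
    [nlbaf]=7
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)

fmt_idx=$(( ${ns[flbas]} & 0xf ))          # FLBAS bits 0-3 select the current format
fmt=${ns[lbaf$fmt_idx]}

# Extract lbads from the "ms:... lbads:... rp:..." string.
[[ $fmt =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
echo "in-use format $fmt_idx: $fmt"
echo "block size: $(( 1 << lbads )) bytes"  # 1 << 12 = 4096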
00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.978 
14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:16.978 14:46:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 
14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:16.979 14:46:02 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:16.979 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:17.243 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:17.244 
14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:17.244 14:46:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:17.244 14:46:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:17.244 14:46:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:17.244 14:46:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:17.244 14:46:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.244 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
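The lbaf entries dumped for nvme2n3 above describe the supported LBA formats: ms is the metadata bytes per block and lbads is the data size as a power of two, so lbads:9 formats are 512-byte blocks and lbads:12 (format 4, marked "in use") is 4096-byte blocks. Purely illustrative arithmetic, not part of the test scripts:

echo $((1 << 9))    # 512  -> the lbads:9 formats
echo $((1 << 12))   # 4096 -> the lbads:12 formats, including the in-use lbaf4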
00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
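The nvme_get trace above (here filling the nvme3 array) works by piping nvme-cli's id-ctrl output through read with IFS=: and eval-ing each field into a bash associative array. A minimal standalone sketch of that pattern, assuming an nvme binary on PATH and a /dev/nvme3 device; it is not the functions.sh implementation itself:

declare -A ctrl=()
while IFS=: read -r reg val; do
  [[ -n $val ]] || continue               # keep only "field : value" lines
  reg=${reg//[[:space:]]/}                # strip the padding around the field name
  ctrl[$reg]=${val# }                     # e.g. ctrl[sn]='12343 ', ctrl[oncs]=0x15d
done < <(nvme id-ctrl /dev/nvme3)
echo "sn=${ctrl[sn]} mn=${ctrl[mn]} oncs=${ctrl[oncs]}"

functions.sh additionally registers each parsed controller in the global ctrls/nvmes/bdfs/ordered_ctrls arrays, as the @60-@63 lines of this trace show.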
00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.245 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 
14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.246 14:46:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.246 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:17.247 14:46:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:17.247 14:46:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:17.247 
14:46:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:17.247 14:46:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:17.248 14:46:02 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:17.248 14:46:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:17.248 14:46:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:17.248 14:46:02 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:17.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.393 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:18.393 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:18.393 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:18.393 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:18.393 14:46:03 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:18.393 14:46:03 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:18.393 14:46:03 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.393 14:46:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:18.393 ************************************ 00:09:18.393 START TEST nvme_simple_copy 00:09:18.393 ************************************ 00:09:18.393 14:46:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:18.652 Initializing NVMe Controllers 00:09:18.652 Attaching to 0000:00:10.0 00:09:18.652 Controller supports SCC. Attached to 0000:00:10.0 00:09:18.652 Namespace ID: 1 size: 6GB 00:09:18.652 Initialization complete. 00:09:18.652 00:09:18.652 Controller QEMU NVMe Ctrl (12340 ) 00:09:18.652 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:18.652 Namespace Block Size:4096 00:09:18.652 Writing LBAs 0 to 63 with Random Data 00:09:18.652 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:18.652 LBAs matching Written Data: 64 00:09:18.652 ************************************ 00:09:18.652 END TEST nvme_simple_copy 00:09:18.652 ************************************ 00:09:18.652 00:09:18.652 real 0m0.283s 00:09:18.652 user 0m0.119s 00:09:18.652 sys 0m0.062s 00:09:18.652 14:46:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.652 14:46:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:18.652 ************************************ 00:09:18.652 END TEST nvme_scc 00:09:18.652 ************************************ 00:09:18.652 00:09:18.652 real 0m7.894s 00:09:18.652 user 0m1.129s 00:09:18.652 sys 0m1.423s 00:09:18.652 14:46:04 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.652 14:46:04 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:18.652 14:46:04 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:18.652 14:46:04 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:18.652 14:46:04 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:18.652 14:46:04 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:18.652 14:46:04 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:18.652 14:46:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.652 14:46:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.652 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:18.652 ************************************ 00:09:18.652 START TEST nvme_fdp 00:09:18.652 ************************************ 00:09:18.652 14:46:04 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:18.911 * Looking for test storage... 
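The controller selection just before the copy test hinges on ctrl_has_scc: the cached ONCS value (0x15d for every controller here) is tested for bit 8, the Copy command support bit, and the first match (nvme1 at 0000:00:10.0) is handed to nvme_simple_copy. A minimal sketch of that predicate, reusing an associative array filled as in the earlier sketch; the function name is illustrative:

ctrl_supports_copy() {
  local -n _c=$1                          # nameref to a controller array built from id-ctrl
  local oncs=${_c[oncs]:-0}
  (( oncs & (1 << 8) ))                   # ONCS bit 8: Copy (simple copy) command supported
}
ctrl_supports_copy ctrl && echo "copy-capable"   # true for oncs=0x15d, since 0x15d & 0x100 != 0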
00:09:18.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.911 14:46:04 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.911 14:46:04 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.911 14:46:04 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.911 14:46:04 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:18.911 14:46:04 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.911 14:46:04 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.911 --rc genhtml_branch_coverage=1 00:09:18.911 --rc genhtml_function_coverage=1 00:09:18.911 --rc genhtml_legend=1 00:09:18.911 --rc geninfo_all_blocks=1 00:09:18.911 --rc geninfo_unexecuted_blocks=1 00:09:18.911 00:09:18.911 ' 00:09:18.911 14:46:04 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.911 --rc genhtml_branch_coverage=1 00:09:18.911 --rc genhtml_function_coverage=1 00:09:18.911 --rc genhtml_legend=1 00:09:18.911 --rc geninfo_all_blocks=1 00:09:18.911 --rc geninfo_unexecuted_blocks=1 00:09:18.911 00:09:18.911 ' 00:09:18.911 14:46:04 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.911 --rc genhtml_branch_coverage=1 00:09:18.911 --rc genhtml_function_coverage=1 00:09:18.911 --rc genhtml_legend=1 00:09:18.911 --rc geninfo_all_blocks=1 00:09:18.911 --rc geninfo_unexecuted_blocks=1 00:09:18.911 00:09:18.911 ' 00:09:18.911 14:46:04 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.911 --rc genhtml_branch_coverage=1 00:09:18.911 --rc genhtml_function_coverage=1 00:09:18.911 --rc genhtml_legend=1 00:09:18.911 --rc geninfo_all_blocks=1 00:09:18.911 --rc geninfo_unexecuted_blocks=1 00:09:18.911 00:09:18.911 ' 00:09:18.911 14:46:04 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:18.911 14:46:04 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:18.911 14:46:04 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:18.911 14:46:04 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:18.911 14:46:04 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.911 14:46:04 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.911 14:46:04 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.911 14:46:04 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.911 14:46:04 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.911 14:46:04 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:18.911 14:46:04 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
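The lt/cmp_versions trace above (scripts/common.sh) decides whether the installed lcov predates 2.x by splitting both version strings on dots and dashes and comparing the components numerically, left to right. A rough standalone equivalent, assuming purely numeric components; it is not the common.sh implementation itself:

version_lt() {                            # succeeds if $1 is strictly older than $2
  local -a a b; local i
  IFS=.- read -ra a <<< "$1"
  IFS=.- read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                                # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 is older than 2"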
00:09:18.911 14:46:04 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:18.911 14:46:04 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:18.911 14:46:04 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:18.911 14:46:04 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:18.911 14:46:04 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:18.912 14:46:04 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:18.912 14:46:04 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:18.912 14:46:04 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:18.912 14:46:04 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:18.912 14:46:04 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.912 14:46:04 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:19.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.432 Waiting for block devices as requested 00:09:19.432 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.432 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.432 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.692 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.992 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:24.992 14:46:10 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:24.992 14:46:10 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:24.992 14:46:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.992 14:46:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:24.992 14:46:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:24.992 14:46:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:24.992 14:46:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:24.992 14:46:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:24.992 14:46:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.992 14:46:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:24.992 14:46:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:24.992 14:46:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:24.992 14:46:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
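The long `eval 'nvme0[...]=...'` run that starts here is `scan_nvme_ctrls` loading the output of `nvme id-ctrl` into a Bash associative array, one register per "field : value" line. A simplified sketch of that capture pattern is below; it assumes nvme-cli is installed and is not the nvme/functions.sh implementation (the `capture_id_ctrl` helper and the plain `ctrl` array name are made up for the example, while field names such as `mn` and `mdts` appear in the trace itself).

```bash
#!/usr/bin/env bash
# Sketch: read `nvme id-ctrl` output ("field : value" per line) into an
# associative array so later steps can test registers without re-running
# nvme-cli. Illustration of the technique only.
declare -A ctrl

capture_id_ctrl() {
    local dev=$1 reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # strip spaces from the key
        val="${val#"${val%%[![:space:]]*}"}"      # left-trim the value
        [[ -n $reg ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl "$dev")
}

capture_id_ctrl /dev/nvme0
echo "model: ${ctrl[mn]}, max transfer exponent (mdts): ${ctrl[mdts]}"
```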
00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:24.993 14:46:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:24.993 14:46:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.993 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
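Once the registers are captured this way, capability words such as the oacs (0x12a) and oncs (0x15d) values seen in this dump can be tested bit by bit. A hypothetical helper, assuming the `ctrl` array from the previous sketch; bit meanings come from the NVMe specification and are deliberately not restated here.

```bash
# Hypothetical helper: test whether a given bit is set in one of the
# captured capability words (oacs/oncs/ctratt as shown in the trace).
ctrl_bit_set() {
    local field=$1 bit=$2
    (( ( ${ctrl[$field]:-0} >> bit ) & 1 ))
}

if ctrl_bit_set oncs 2; then
    echo "oncs bit 2 is set for this controller"
fi
```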
00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:24.994 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:24.994 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:24.995 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.995 
14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.995 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:24.996 14:46:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.996 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:24.997 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:24.997 14:46:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:24.997 14:46:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:24.997 14:46:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.997 14:46:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 
14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:24.998 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 
14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:25.000 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:25.000 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:25.001 14:46:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:25.002 14:46:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:25.002 14:46:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:25.002 14:46:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:25.002 14:46:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:25.002 
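Everything from functions.sh@47 onward in this trace is the controller scan loop: for each /sys/class/nvme/nvme* entry it resolves the PCI address, checks it with pci_can_use, pulls id-ctrl and id-ns data through nvme_get, and registers the device in the ctrls/nvmes/bdfs/ordered_ctrls arrays (@60-@63) before moving on, here from nvme1 at 0000:00:10.0 to nvme2 at 0000:00:12.0. A minimal bash sketch of that loop, reconstructed from the line references visible above; pci_can_use and nvme_get are the helpers the trace itself calls, while the wrapper name scan_nvme_ctrls and the readlink-based PCI lookup are illustrative assumptions, not the verbatim script:

    scan_nvme_ctrls() {                                  # hypothetical wrapper for illustration
      declare -gA ctrls nvmes bdfs
      declare -ga ordered_ctrls
      local ctrl ns pci ctrl_dev ns_dev
      for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:10.0 (assumed lookup)
        pci_can_use "$pci" || continue                   # honors the PCI allow/block lists (scripts/common.sh)
        ctrl_dev=${ctrl##*/}                             # nvme0, nvme1, nvme2, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # fills the nvmeX[...] associative array
        local -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do              # namespaces: nvme1n1, nvme1n2, ...
          [[ -e $ns ]] || continue
          ns_dev=${ns##*/}
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"        # fills the nvmeXnY[...] associative array
          _ctrl_ns[${ns##*n}]=$ns_dev                    # index namespace under its number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                     # registration steps seen at @60-@63
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        unset -n _ctrl_ns
      done
    }
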
14:46:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:25.002 14:46:10 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:25.002 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:25.003 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:25.004 14:46:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
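The trace above is the nvme_get helper in nvme/functions.sh walking `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2` output field by field and storing each register in the nvme2 associative array. A minimal sketch of that pattern, assuming only bash 4+ and the nvme-cli binary shown in the log (nvme_get_sketch is an illustrative name, not the real helper):

# Minimal sketch: split each "reg : val" line of `nvme id-ctrl` output and
# store it in a named global associative array, as the trace above does.
nvme_get_sketch() {   # usage: nvme_get_sketch nvme2 id-ctrl /dev/nvme2
    local ref=$1 reg val
    shift
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # e.g. "oacs"
        val=${val# }                        # e.g. "0x12a"
        [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}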
00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:25.005 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
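Once a namespace array like nvme2n1 above is filled in (flbas=0x4 selecting lbaf4, which reports lbads:12 and is marked in use), the active block size follows directly from those fields. A hedged sketch of that lookup, with lbaf_block_size_sketch as an illustrative name rather than anything from functions.sh:

# Illustrative sketch: derive the in-use LBA data size from the fields captured
# above (flbas picks the LBA format index, lbads is log2 of the data size).
lbaf_block_size_sketch() {
    local -n ns=$1                                  # e.g. nvme2n1
    local idx=$(( ${ns[flbas]} & 0xf ))             # 0x4 -> LBA format 4
    local lbads
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ns[lbaf$idx]}")
    echo $(( 1 << lbads ))                          # lbads:12 -> 4096-byte blocks
}
# lbaf_block_size_sketch nvme2n1    # prints 4096 for the values traced above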
00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.006 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:25.007 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.007 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:25.008 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
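
The xtrace above is the suite's nvme_get helper walking the output of /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 field by field: each report line is split on ':' via IFS, and the name/value pair is eval'd into a per-device associative array (nvme2n3[nsze], nvme2n3[flbas], the lbafN descriptors, and so on). A minimal standalone sketch of that parsing pattern follows; the device path /dev/nvme0n1 and the plain declare -A array are illustrative assumptions, not taken from this run, and it leaves out the eval/indirection the real functions.sh uses to name the array after the device.

  # Sketch only: collect `nvme id-ns` fields into a Bash associative array
  declare -A idns
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                     # field name with padding stripped
      val=${val#"${val%%[![:space:]]*}"}           # value with leading whitespace trimmed
      [[ -n $reg ]] && idns[$reg]=$val
  done < <(sudo nvme id-ns /dev/nvme0n1)
  echo "nsze=${idns[nsze]} flbas=${idns[flbas]} lbaf0=${idns[lbaf0]}"

The in-use format is then read back out of flbas, which is how the suite later knows that lbaf4 (lbads:12, i.e. a 4096-byte logical block) is the active one for these namespaces.
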
00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:25.009 
14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:25.009 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.010 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:25.010 14:46:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:25.010 14:46:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:25.010 14:46:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:25.010 14:46:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:25.010 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.010 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 
14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.011 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 
14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:25.012 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:25.013 14:46:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:25.013 14:46:10 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:25.014 14:46:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:25.014 14:46:10 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:25.014 14:46:10 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:26.159 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.159 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.159 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.159 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.159 14:46:11 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:26.159 14:46:11 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:26.159 14:46:11 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.159 14:46:11 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:26.159 ************************************ 00:09:26.159 START TEST nvme_flexible_data_placement 00:09:26.159 ************************************ 00:09:26.159 14:46:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:26.420 Initializing NVMe Controllers 00:09:26.420 Attaching to 0000:00:13.0 00:09:26.420 Controller supports FDP Attached to 0000:00:13.0 00:09:26.420 Namespace ID: 1 Endurance Group ID: 1 00:09:26.420 Initialization complete. 00:09:26.420 00:09:26.420 ================================== 00:09:26.420 == FDP tests for Namespace: #01 == 00:09:26.420 ================================== 00:09:26.420 00:09:26.420 Get Feature: FDP: 00:09:26.420 ================= 00:09:26.420 Enabled: Yes 00:09:26.420 FDP configuration Index: 0 00:09:26.420 00:09:26.420 FDP configurations log page 00:09:26.420 =========================== 00:09:26.420 Number of FDP configurations: 1 00:09:26.420 Version: 0 00:09:26.420 Size: 112 00:09:26.420 FDP Configuration Descriptor: 0 00:09:26.420 Descriptor Size: 96 00:09:26.420 Reclaim Group Identifier format: 2 00:09:26.420 FDP Volatile Write Cache: Not Present 00:09:26.420 FDP Configuration: Valid 00:09:26.420 Vendor Specific Size: 0 00:09:26.420 Number of Reclaim Groups: 2 00:09:26.420 Number of Reclaim Unit Handles: 8 00:09:26.420 Max Placement Identifiers: 128 00:09:26.420 Number of Namespaces Supported: 256 00:09:26.420 Reclaim Unit Nominal Size: 6000000 bytes 00:09:26.420 Estimated Reclaim Unit Time Limit: Not Reported 00:09:26.420 RUH Desc #000: RUH Type: Initially Isolated 00:09:26.420 RUH Desc #001: RUH Type: Initially Isolated 00:09:26.420 RUH Desc #002: RUH Type: Initially Isolated 00:09:26.420 RUH Desc #003: RUH Type: Initially Isolated 00:09:26.420 RUH Desc #004: RUH Type: Initially Isolated 00:09:26.420 RUH Desc #005: RUH Type: Initially Isolated 00:09:26.420 RUH Desc #006: RUH Type: Initially Isolated 00:09:26.420 RUH Desc #007: RUH Type: Initially Isolated 00:09:26.420 00:09:26.420 FDP reclaim unit handle usage log page 00:09:26.420 ====================================== 00:09:26.420 Number of Reclaim Unit Handles: 8 00:09:26.420 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:26.420 RUH Usage Desc #001: RUH Attributes: Unused 00:09:26.420 RUH Usage Desc #002: RUH Attributes: Unused 00:09:26.420 RUH Usage Desc #003: RUH Attributes: Unused 00:09:26.420 RUH Usage Desc #004: RUH Attributes: Unused 00:09:26.420 RUH Usage Desc #005: RUH Attributes: Unused 00:09:26.420 RUH Usage Desc #006: RUH Attributes: Unused 00:09:26.420 RUH Usage Desc #007: RUH Attributes: Unused 00:09:26.420 00:09:26.420 FDP statistics log page 00:09:26.420 ======================= 00:09:26.420 Host bytes with metadata written: 923291648 00:09:26.420 Media bytes with metadata written: 923463680 00:09:26.420 Media bytes erased: 0 00:09:26.420 00:09:26.420 FDP Reclaim unit handle status 00:09:26.420 ============================== 00:09:26.420 Number of RUHS descriptors: 2 00:09:26.420 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004f7b 00:09:26.420 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:26.420 00:09:26.420 FDP write on placement id: 0 success 00:09:26.420 00:09:26.420 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:09:26.420 00:09:26.420 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:26.420 00:09:26.420 Get Feature: FDP Events for Placement handle: #0 00:09:26.420 ======================== 00:09:26.420 Number of FDP Events: 6 00:09:26.420 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:26.420 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:26.420 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:26.420 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:26.420 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:26.420 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:26.420 00:09:26.420 FDP events log page 00:09:26.420 =================== 00:09:26.420 Number of FDP events: 1 00:09:26.420 FDP Event #0: 00:09:26.420 Event Type: RU Not Written to Capacity 00:09:26.420 Placement Identifier: Valid 00:09:26.420 NSID: Valid 00:09:26.420 Location: Valid 00:09:26.420 Placement Identifier: 0 00:09:26.420 Event Timestamp: 6 00:09:26.420 Namespace Identifier: 1 00:09:26.420 Reclaim Group Identifier: 0 00:09:26.420 Reclaim Unit Handle Identifier: 0 00:09:26.420 00:09:26.420 FDP test passed 00:09:26.420 00:09:26.420 real 0m0.248s 00:09:26.420 user 0m0.080s 00:09:26.420 sys 0m0.065s 00:09:26.420 14:46:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.420 14:46:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:26.420 ************************************ 00:09:26.420 END TEST nvme_flexible_data_placement 00:09:26.420 ************************************ 00:09:26.420 00:09:26.420 real 0m7.786s 00:09:26.420 user 0m1.017s 00:09:26.420 sys 0m1.461s 00:09:26.420 14:46:11 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.420 ************************************ 00:09:26.420 END TEST nvme_fdp 00:09:26.420 ************************************ 00:09:26.420 14:46:11 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:26.682 14:46:11 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:26.682 14:46:11 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:26.682 14:46:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.682 14:46:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.682 14:46:11 -- common/autotest_common.sh@10 -- # set +x 00:09:26.682 ************************************ 00:09:26.682 START TEST nvme_rpc 00:09:26.682 ************************************ 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:26.682 * Looking for test storage... 
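For reference, the controller selection traced above reduces to a single capability test: bit 19 of CTRATT (Flexible Data Placement supported) is checked for each controller, and nvme3 is chosen because its CTRATT of 0x88010 has that bit set while the other controllers report 0x8000. A minimal stand-alone sketch of the same check, assuming nvme-cli with JSON output and jq are available (the device path is illustrative):

    # Mirror the ctrl_has_fdp check from nvme/functions.sh: CTRATT bit 19 == FDP support
    ctrl=${1:-/dev/nvme3}                               # illustrative device path
    ctratt=$(nvme id-ctrl "$ctrl" -o json | jq -r '.ctratt')
    if (( ctratt & (1 << 19) )); then
        printf '%s supports FDP (ctratt=0x%x)\n' "$ctrl" "$ctratt"
    else
        printf '%s does not support FDP\n' "$ctrl"
    fi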
00:09:26.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.682 14:46:12 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.682 --rc genhtml_branch_coverage=1 00:09:26.682 --rc genhtml_function_coverage=1 00:09:26.682 --rc genhtml_legend=1 00:09:26.682 --rc geninfo_all_blocks=1 00:09:26.682 --rc geninfo_unexecuted_blocks=1 00:09:26.682 00:09:26.682 ' 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.682 --rc genhtml_branch_coverage=1 00:09:26.682 --rc genhtml_function_coverage=1 00:09:26.682 --rc genhtml_legend=1 00:09:26.682 --rc geninfo_all_blocks=1 00:09:26.682 --rc geninfo_unexecuted_blocks=1 00:09:26.682 00:09:26.682 ' 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:26.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.682 --rc genhtml_branch_coverage=1 00:09:26.682 --rc genhtml_function_coverage=1 00:09:26.682 --rc genhtml_legend=1 00:09:26.682 --rc geninfo_all_blocks=1 00:09:26.682 --rc geninfo_unexecuted_blocks=1 00:09:26.682 00:09:26.682 ' 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.682 --rc genhtml_branch_coverage=1 00:09:26.682 --rc genhtml_function_coverage=1 00:09:26.682 --rc genhtml_legend=1 00:09:26.682 --rc geninfo_all_blocks=1 00:09:26.682 --rc geninfo_unexecuted_blocks=1 00:09:26.682 00:09:26.682 ' 00:09:26.682 14:46:12 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.682 14:46:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:26.682 14:46:12 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:26.683 14:46:12 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:26.683 14:46:12 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:26.683 14:46:12 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:26.945 14:46:12 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:26.945 14:46:12 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:26.945 14:46:12 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:26.945 14:46:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:26.945 14:46:12 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65591 00:09:26.945 14:46:12 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:26.945 14:46:12 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65591 00:09:26.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.945 14:46:12 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65591 ']' 00:09:26.945 14:46:12 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:26.945 14:46:12 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.945 14:46:12 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.945 14:46:12 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.945 14:46:12 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.945 14:46:12 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.945 [2024-11-17 14:46:12.310211] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:26.945 [2024-11-17 14:46:12.310347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65591 ] 00:09:26.945 [2024-11-17 14:46:12.474095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.206 [2024-11-17 14:46:12.594844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.206 [2024-11-17 14:46:12.594862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.778 14:46:13 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.778 14:46:13 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:27.778 14:46:13 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:28.039 Nvme0n1 00:09:28.039 14:46:13 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:28.039 14:46:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:28.300 request: 00:09:28.300 { 00:09:28.300 "bdev_name": "Nvme0n1", 00:09:28.300 "filename": "non_existing_file", 00:09:28.300 "method": "bdev_nvme_apply_firmware", 00:09:28.300 "req_id": 1 00:09:28.300 } 00:09:28.300 Got JSON-RPC error response 00:09:28.300 response: 00:09:28.300 { 00:09:28.300 "code": -32603, 00:09:28.300 "message": "open file failed." 00:09:28.300 } 00:09:28.300 14:46:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:28.300 14:46:13 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:28.300 14:46:13 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:28.562 14:46:13 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:28.562 14:46:13 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65591 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65591 ']' 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65591 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65591 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.562 killing process with pid 65591 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65591' 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65591 00:09:28.562 14:46:13 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65591 00:09:30.475 00:09:30.475 real 0m3.715s 00:09:30.475 user 0m6.916s 00:09:30.475 sys 0m0.606s 00:09:30.475 14:46:15 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.475 ************************************ 00:09:30.475 END TEST nvme_rpc 00:09:30.475 14:46:15 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.475 ************************************ 00:09:30.475 14:46:15 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:30.475 14:46:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:30.475 14:46:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.475 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:09:30.475 ************************************ 00:09:30.475 START TEST nvme_rpc_timeouts 00:09:30.475 ************************************ 00:09:30.475 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:30.475 * Looking for test storage... 00:09:30.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.475 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.475 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.475 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.475 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.475 14:46:15 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.476 --rc genhtml_branch_coverage=1 00:09:30.476 --rc genhtml_function_coverage=1 00:09:30.476 --rc genhtml_legend=1 00:09:30.476 --rc geninfo_all_blocks=1 00:09:30.476 --rc geninfo_unexecuted_blocks=1 00:09:30.476 00:09:30.476 ' 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.476 --rc genhtml_branch_coverage=1 00:09:30.476 --rc genhtml_function_coverage=1 00:09:30.476 --rc genhtml_legend=1 00:09:30.476 --rc geninfo_all_blocks=1 00:09:30.476 --rc geninfo_unexecuted_blocks=1 00:09:30.476 00:09:30.476 ' 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:30.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.476 --rc genhtml_branch_coverage=1 00:09:30.476 --rc genhtml_function_coverage=1 00:09:30.476 --rc genhtml_legend=1 00:09:30.476 --rc geninfo_all_blocks=1 00:09:30.476 --rc geninfo_unexecuted_blocks=1 00:09:30.476 00:09:30.476 ' 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.476 --rc genhtml_branch_coverage=1 00:09:30.476 --rc genhtml_function_coverage=1 00:09:30.476 --rc genhtml_legend=1 00:09:30.476 --rc geninfo_all_blocks=1 00:09:30.476 --rc geninfo_unexecuted_blocks=1 00:09:30.476 00:09:30.476 ' 00:09:30.476 14:46:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.476 14:46:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65662 00:09:30.476 14:46:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65662 00:09:30.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
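Stripped of the xtrace plumbing, the nvme_rpc test that just finished amounts to three rpc.py calls against a freshly started spdk_tgt; a hand-run equivalent from the SPDK repo root looks roughly like the following (the sleep stands in for the test's waitforlisten helper, and the firmware path is intentionally the same non-existent file the test feeds in to provoke the -32603 error):

    ./build/bin/spdk_tgt -m 0x3 &
    sleep 2                                                    # crude stand-in for waitforlisten
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes Nvme0n1
    ./scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1             # expected failure: "open file failed."
    ./scripts/rpc.py bdev_nvme_detach_controller Nvme0
    kill %1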
00:09:30.476 14:46:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65694 00:09:30.476 14:46:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:30.476 14:46:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65694 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65694 ']' 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.476 14:46:15 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:30.476 14:46:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:30.735 [2024-11-17 14:46:16.037769] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:30.735 [2024-11-17 14:46:16.037936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65694 ] 00:09:30.735 [2024-11-17 14:46:16.200359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:30.995 [2024-11-17 14:46:16.359716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.995 [2024-11-17 14:46:16.359837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.938 14:46:17 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.938 Checking default timeout settings: 00:09:31.938 14:46:17 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:31.938 14:46:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:31.939 14:46:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:32.200 Making settings changes with rpc: 00:09:32.200 14:46:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:32.200 14:46:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:32.200 Check default vs. modified settings: 00:09:32.200 14:46:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:32.200 14:46:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65662 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65662 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.771 Setting action_on_timeout is changed as expected. 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65662 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65662 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.771 Setting timeout_us is changed as expected. 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
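The comparison being run here is just a diff of two save_config snapshots, one taken before and one after the bdev_nvme_set_options call; a compact equivalent against a running target is sketched below (file names shortened, and the jq filter is an assumption about the saved-config layout; the test itself greps the flat text instead):

    rpc=./scripts/rpc.py
    $rpc save_config > /tmp/settings_default.json
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified.json
    for f in /tmp/settings_default.json /tmp/settings_modified.json; do
        jq '.subsystems[] | select(.subsystem == "bdev") | .config[]
            | select(.method == "bdev_nvme_set_options") | .params
            | {action_on_timeout, timeout_us, timeout_admin_us}' "$f"
    done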
00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65662 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65662 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:32.771 Setting timeout_admin_us is changed as expected. 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65662 /tmp/settings_modified_65662 00:09:32.771 14:46:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65694 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65694 ']' 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65694 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65694 00:09:32.772 killing process with pid 65694 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65694' 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65694 00:09:32.772 14:46:18 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65694 00:09:34.682 RPC TIMEOUT SETTING TEST PASSED. 00:09:34.682 14:46:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:09:34.682 ************************************ 00:09:34.682 END TEST nvme_rpc_timeouts 00:09:34.682 ************************************ 00:09:34.682 00:09:34.682 real 0m3.937s 00:09:34.682 user 0m7.410s 00:09:34.682 sys 0m0.738s 00:09:34.682 14:46:19 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.682 14:46:19 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:34.682 14:46:19 -- spdk/autotest.sh@239 -- # uname -s 00:09:34.682 14:46:19 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:34.682 14:46:19 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:34.682 14:46:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.682 14:46:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.682 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:34.682 ************************************ 00:09:34.682 START TEST sw_hotplug 00:09:34.682 ************************************ 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:34.682 * Looking for test storage... 00:09:34.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.682 14:46:19 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.682 --rc genhtml_branch_coverage=1 00:09:34.682 --rc genhtml_function_coverage=1 00:09:34.682 --rc genhtml_legend=1 00:09:34.682 --rc geninfo_all_blocks=1 00:09:34.682 --rc geninfo_unexecuted_blocks=1 00:09:34.682 00:09:34.682 ' 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.682 --rc genhtml_branch_coverage=1 00:09:34.682 --rc genhtml_function_coverage=1 00:09:34.682 --rc genhtml_legend=1 00:09:34.682 --rc geninfo_all_blocks=1 00:09:34.682 --rc geninfo_unexecuted_blocks=1 00:09:34.682 00:09:34.682 ' 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.682 --rc genhtml_branch_coverage=1 00:09:34.682 --rc genhtml_function_coverage=1 00:09:34.682 --rc genhtml_legend=1 00:09:34.682 --rc geninfo_all_blocks=1 00:09:34.682 --rc geninfo_unexecuted_blocks=1 00:09:34.682 00:09:34.682 ' 00:09:34.682 14:46:19 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.682 --rc genhtml_branch_coverage=1 00:09:34.682 --rc genhtml_function_coverage=1 00:09:34.682 --rc genhtml_legend=1 00:09:34.682 --rc geninfo_all_blocks=1 00:09:34.682 --rc geninfo_unexecuted_blocks=1 00:09:34.682 00:09:34.682 ' 00:09:34.682 14:46:19 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:34.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.944 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:34.944 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:34.944 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:34.944 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:34.944 14:46:20 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:34.944 14:46:20 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:34.944 14:46:20 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:34.944 14:46:20 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:34.944 14:46:20 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:34.944 14:46:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:35.204 14:46:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.204 14:46:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.204 14:46:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.204 14:46:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.204 14:46:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:35.204 14:46:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.204 14:46:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.204 14:46:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.204 14:46:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:35.205 14:46:20 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.205 14:46:20 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:35.205 14:46:20 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:35.205 14:46:20 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:35.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.466 Waiting for block devices as requested 00:09:35.727 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.727 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.987 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.269 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:41.269 14:46:26 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:41.269 14:46:26 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:41.269 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:41.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.269 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:41.527 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:41.786 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.786 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:42.046 14:46:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66556 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:42.046 14:46:27 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:42.046 14:46:27 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:42.046 14:46:27 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:42.046 14:46:27 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:42.046 14:46:27 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:42.046 14:46:27 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:42.308 Initializing NVMe Controllers 00:09:42.308 Attaching to 0000:00:10.0 00:09:42.308 Attaching to 0000:00:11.0 00:09:42.308 Attached to 0000:00:10.0 00:09:42.308 Attached to 0000:00:11.0 00:09:42.308 Initialization complete. Starting I/O... 
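For context, the long-running process under test is the stand-alone hotplug example launched above (hotplug_pid=66556); started by hand it is just:

    cd /home/vagrant/spdk_repo/spdk
    # Flag meanings are inferred from the test parameters and should be treated as an
    # assumption: -i shm group id, -t run time, -n/-r expected hot-insert/hot-remove
    # counts (3 events x 2 allowed controllers = 6 each), -l log level.
    ./build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning

While it runs, remove_attach_helper removes each allowed controller (0000:00:10.0 and 0000:00:11.0) and re-probes it under uio_pci_generic three times, pausing the 6-second hotplug_wait between steps; the exact sysfs paths are not visible in the xtrace (only the echoed values are), and the periodic "I/Os completed" lines that follow appear to be the example's own per-interval statistics.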
00:09:42.308 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:42.308 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:42.308 00:09:43.280 QEMU NVMe Ctrl (12340 ): 2128 I/Os completed (+2128) 00:09:43.280 QEMU NVMe Ctrl (12341 ): 2135 I/Os completed (+2135) 00:09:43.280 00:09:44.227 QEMU NVMe Ctrl (12340 ): 4688 I/Os completed (+2560) 00:09:44.227 QEMU NVMe Ctrl (12341 ): 4695 I/Os completed (+2560) 00:09:44.227 00:09:45.170 QEMU NVMe Ctrl (12340 ): 7212 I/Os completed (+2524) 00:09:45.170 QEMU NVMe Ctrl (12341 ): 7219 I/Os completed (+2524) 00:09:45.170 00:09:46.109 QEMU NVMe Ctrl (12340 ): 10254 I/Os completed (+3042) 00:09:46.109 QEMU NVMe Ctrl (12341 ): 10265 I/Os completed (+3046) 00:09:46.109 00:09:47.489 QEMU NVMe Ctrl (12340 ): 13919 I/Os completed (+3665) 00:09:47.489 QEMU NVMe Ctrl (12341 ): 13948 I/Os completed (+3683) 00:09:47.489 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:48.058 [2024-11-17 14:46:33.445279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:48.058 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:48.058 [2024-11-17 14:46:33.446255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.446291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.446305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.446322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:48.058 [2024-11-17 14:46:33.448173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.448219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.448233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.448247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:48.058 [2024-11-17 14:46:33.464409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:48.058 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:48.058 [2024-11-17 14:46:33.465306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.465377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.465400] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.465417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:48.058 [2024-11-17 14:46:33.466801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.466833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.466845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 [2024-11-17 14:46:33.466856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:48.058 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:48.058 EAL: Scan for (pci) bus failed. 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.058 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:48.317 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:48.317 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.318 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.318 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.318 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:48.318 Attaching to 0000:00:10.0 00:09:48.318 Attached to 0000:00:10.0 00:09:48.318 QEMU NVMe Ctrl (12340 ): 28 I/Os completed (+28) 00:09:48.318 00:09:48.318 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:48.318 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.318 14:46:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:48.318 Attaching to 0000:00:11.0 00:09:48.318 Attached to 0000:00:11.0 00:09:49.255 QEMU NVMe Ctrl (12340 ): 3730 I/Os completed (+3702) 00:09:49.255 QEMU NVMe Ctrl (12341 ): 3426 I/Os completed (+3426) 00:09:49.255 00:09:50.194 QEMU NVMe Ctrl (12340 ): 7144 I/Os completed (+3414) 00:09:50.194 QEMU NVMe Ctrl (12341 ): 6918 I/Os completed (+3492) 00:09:50.194 00:09:51.137 QEMU NVMe Ctrl (12340 ): 9784 I/Os completed (+2640) 00:09:51.137 QEMU NVMe Ctrl (12341 ): 9561 I/Os completed (+2643) 00:09:51.137 00:09:52.523 QEMU NVMe Ctrl (12340 ): 12432 I/Os completed (+2648) 00:09:52.523 QEMU NVMe Ctrl (12341 ): 12213 I/Os completed (+2652) 00:09:52.523 00:09:53.465 QEMU NVMe Ctrl (12340 ): 15136 I/Os completed (+2704) 00:09:53.465 QEMU NVMe Ctrl (12341 ): 14917 I/Os completed (+2704) 00:09:53.465 00:09:54.408 QEMU NVMe Ctrl (12340 ): 18430 I/Os completed (+3294) 00:09:54.408 QEMU NVMe Ctrl (12341 ): 18213 I/Os completed (+3296) 00:09:54.408 00:09:55.349 QEMU NVMe Ctrl (12340 ): 21777 I/Os completed (+3347) 
00:09:55.349 QEMU NVMe Ctrl (12341 ): 21572 I/Os completed (+3359) 00:09:55.349 00:09:56.292 QEMU NVMe Ctrl (12340 ): 24453 I/Os completed (+2676) 00:09:56.292 QEMU NVMe Ctrl (12341 ): 24248 I/Os completed (+2676) 00:09:56.292 00:09:57.239 QEMU NVMe Ctrl (12340 ): 27297 I/Os completed (+2844) 00:09:57.240 QEMU NVMe Ctrl (12341 ): 27100 I/Os completed (+2852) 00:09:57.240 00:09:58.178 QEMU NVMe Ctrl (12340 ): 30954 I/Os completed (+3657) 00:09:58.178 QEMU NVMe Ctrl (12341 ): 30762 I/Os completed (+3662) 00:09:58.178 00:09:59.117 QEMU NVMe Ctrl (12340 ): 34684 I/Os completed (+3730) 00:09:59.117 QEMU NVMe Ctrl (12341 ): 34521 I/Os completed (+3759) 00:09:59.117 00:10:00.502 QEMU NVMe Ctrl (12340 ): 37854 I/Os completed (+3170) 00:10:00.502 QEMU NVMe Ctrl (12341 ): 37705 I/Os completed (+3184) 00:10:00.502 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.502 [2024-11-17 14:46:45.713700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:00.502 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:00.502 [2024-11-17 14:46:45.715998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.716068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.716091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.716111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:00.502 [2024-11-17 14:46:45.718281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.718351] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.718367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.718382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:00.502 EAL: Scan for (pci) bus failed. 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.502 [2024-11-17 14:46:45.741200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:00.502 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:00.502 [2024-11-17 14:46:45.742614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.742676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.742699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.742715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:00.502 [2024-11-17 14:46:45.744890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.745082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.745109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 [2024-11-17 14:46:45.745126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:00.502 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:00.502 EAL: Scan for (pci) bus failed. 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:00.502 Attaching to 0000:00:10.0 00:10:00.502 Attached to 0000:00:10.0 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.502 14:46:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:00.502 Attaching to 0000:00:11.0 00:10:00.502 Attached to 0000:00:11.0 00:10:01.444 QEMU NVMe Ctrl (12340 ): 1920 I/Os completed (+1920) 00:10:01.444 QEMU NVMe Ctrl (12341 ): 1700 I/Os completed (+1700) 00:10:01.444 00:10:02.383 QEMU NVMe Ctrl (12340 ): 4636 I/Os completed (+2716) 00:10:02.383 QEMU NVMe Ctrl (12341 ): 4416 I/Os completed (+2716) 00:10:02.383 00:10:03.323 QEMU NVMe Ctrl (12340 ): 8255 I/Os completed (+3619) 00:10:03.323 QEMU NVMe Ctrl (12341 ): 8027 I/Os completed (+3611) 00:10:03.323 00:10:04.259 QEMU NVMe Ctrl (12340 ): 11958 I/Os completed (+3703) 00:10:04.259 QEMU NVMe Ctrl (12341 ): 11741 I/Os completed (+3714) 00:10:04.259 00:10:05.196 QEMU NVMe Ctrl (12340 ): 15636 I/Os completed (+3678) 00:10:05.196 QEMU NVMe Ctrl (12341 ): 15413 I/Os completed (+3672) 00:10:05.196 00:10:06.135 QEMU NVMe Ctrl (12340 ): 19324 I/Os completed (+3688) 00:10:06.135 QEMU NVMe Ctrl (12341 ): 19109 I/Os completed (+3696) 00:10:06.135 00:10:07.521 QEMU NVMe Ctrl (12340 ): 21940 I/Os completed (+2616) 00:10:07.521 QEMU NVMe Ctrl (12341 ): 21730 I/Os completed (+2621) 00:10:07.521 
00:10:08.465 QEMU NVMe Ctrl (12340 ): 24536 I/Os completed (+2596) 00:10:08.465 QEMU NVMe Ctrl (12341 ): 24326 I/Os completed (+2596) 00:10:08.465 00:10:09.407 QEMU NVMe Ctrl (12340 ): 27252 I/Os completed (+2716) 00:10:09.407 QEMU NVMe Ctrl (12341 ): 27043 I/Os completed (+2717) 00:10:09.407 00:10:10.346 QEMU NVMe Ctrl (12340 ): 30810 I/Os completed (+3558) 00:10:10.346 QEMU NVMe Ctrl (12341 ): 30598 I/Os completed (+3555) 00:10:10.346 00:10:11.289 QEMU NVMe Ctrl (12340 ): 34545 I/Os completed (+3735) 00:10:11.289 QEMU NVMe Ctrl (12341 ): 34328 I/Os completed (+3730) 00:10:11.289 00:10:12.233 QEMU NVMe Ctrl (12340 ): 37999 I/Os completed (+3454) 00:10:12.233 QEMU NVMe Ctrl (12341 ): 37793 I/Os completed (+3465) 00:10:12.233 00:10:12.534 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:12.534 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.534 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.534 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.534 [2024-11-17 14:46:58.005183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:12.534 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:12.534 [2024-11-17 14:46:58.006686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.006894] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.006953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.007066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:12.534 [2024-11-17 14:46:58.009899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.010099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.010143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.010345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.534 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.534 [2024-11-17 14:46:58.025404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:12.534 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:12.534 [2024-11-17 14:46:58.027445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.027514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.027535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.027551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:12.534 [2024-11-17 14:46:58.029474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.029531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.029550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 [2024-11-17 14:46:58.029564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.534 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:12.534 EAL: Scan for (pci) bus failed. 00:10:12.534 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:12.534 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:12.816 Attaching to 0000:00:10.0 00:10:12.816 Attached to 0000:00:10.0 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.816 14:46:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:13.087 Attaching to 0000:00:11.0 00:10:13.087 Attached to 0000:00:11.0 00:10:13.088 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:13.088 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:13.088 [2024-11-17 14:46:58.357357] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:25.325 14:47:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:25.325 14:47:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.325 14:47:10 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.91 00:10:25.325 14:47:10 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.91 00:10:25.325 14:47:10 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:25.325 14:47:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.91 00:10:25.325 14:47:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.91 2 00:10:25.325 remove_attach_helper took 42.91s to complete (handling 2 nvme drive(s)) 14:47:10 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:31.917 14:47:16 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66556 00:10:31.917 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66556) - No such process 00:10:31.917 14:47:16 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66556 00:10:31.917 14:47:16 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:31.917 14:47:16 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:31.917 14:47:16 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:31.917 14:47:16 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67104 00:10:31.917 14:47:16 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:31.917 14:47:16 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:31.917 14:47:16 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67104 00:10:31.917 14:47:16 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67104 ']' 00:10:31.917 14:47:16 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.917 14:47:16 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.917 14:47:16 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.917 14:47:16 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.917 14:47:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.917 [2024-11-17 14:47:16.448390] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:31.917 [2024-11-17 14:47:16.448542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67104 ] 00:10:31.917 [2024-11-17 14:47:16.606267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.917 [2024-11-17 14:47:16.731365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:31.917 14:47:17 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.917 14:47:17 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:31.917 14:47:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:31.917 14:47:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:31.917 14:47:17 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:31.917 14:47:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:31.917 14:47:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:31.917 14:47:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:31.917 14:47:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:31.917 14:47:17 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.507 14:47:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.507 14:47:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.507 14:47:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:38.507 14:47:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:38.507 [2024-11-17 14:47:23.515821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:38.507 [2024-11-17 14:47:23.517021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.507 [2024-11-17 14:47:23.517055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.507 [2024-11-17 14:47:23.517068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.507 [2024-11-17 14:47:23.517086] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.507 [2024-11-17 14:47:23.517094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.507 [2024-11-17 14:47:23.517102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.507 [2024-11-17 14:47:23.517109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.507 [2024-11-17 14:47:23.517117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.507 [2024-11-17 14:47:23.517124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.507 [2024-11-17 14:47:23.517135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.507 [2024-11-17 14:47:23.517141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.507 [2024-11-17 14:47:23.517149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.507 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:38.507 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.507 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.507 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.507 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.507 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.507 14:47:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.507 14:47:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.507 [2024-11-17 14:47:24.015814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:38.507 [2024-11-17 14:47:24.016966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.507 [2024-11-17 14:47:24.016994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.507 [2024-11-17 14:47:24.017004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.507 [2024-11-17 14:47:24.017018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.507 [2024-11-17 14:47:24.017026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.507 [2024-11-17 14:47:24.017033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.507 [2024-11-17 14:47:24.017042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.507 [2024-11-17 14:47:24.017049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.507 [2024-11-17 14:47:24.017056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.507 [2024-11-17 14:47:24.017063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.508 [2024-11-17 14:47:24.017071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.508 [2024-11-17 14:47:24.017077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.508 14:47:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.508 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:38.508 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:39.079 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:39.079 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:39.079 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:39.079 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:39.079 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:39.079 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:39.079 14:47:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.079 14:47:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.079 14:47:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.079 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:39.079 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.339 14:47:24 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.339 14:47:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.577 14:47:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.577 14:47:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.577 14:47:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.577 14:47:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.577 14:47:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.577 14:47:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:51.577 14:47:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.577 [2024-11-17 14:47:36.916041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:51.577 [2024-11-17 14:47:36.917295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.577 [2024-11-17 14:47:36.917329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.577 [2024-11-17 14:47:36.917340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.577 [2024-11-17 14:47:36.917357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.577 [2024-11-17 14:47:36.917364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.577 [2024-11-17 14:47:36.917373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.577 [2024-11-17 14:47:36.917380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.577 [2024-11-17 14:47:36.917388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.577 [2024-11-17 14:47:36.917395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.577 [2024-11-17 14:47:36.917403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.577 [2024-11-17 14:47:36.917409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.577 [2024-11-17 14:47:36.917417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.839 [2024-11-17 14:47:37.316038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:51.839 [2024-11-17 14:47:37.317197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.839 [2024-11-17 14:47:37.317227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.839 [2024-11-17 14:47:37.317239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.839 [2024-11-17 14:47:37.317254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.839 [2024-11-17 14:47:37.317262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.839 [2024-11-17 14:47:37.317269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.839 [2024-11-17 14:47:37.317278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.839 [2024-11-17 14:47:37.317284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.839 [2024-11-17 14:47:37.317292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.839 [2024-11-17 14:47:37.317299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.839 [2024-11-17 14:47:37.317307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.839 [2024-11-17 14:47:37.317313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.106 14:47:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.106 14:47:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.106 14:47:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.106 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:52.373 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.373 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.373 14:47:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.610 14:47:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.610 14:47:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.610 14:47:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.610 14:47:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.610 14:47:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.610 14:47:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:04.610 14:47:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:04.610 [2024-11-17 14:47:49.816425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:04.610 [2024-11-17 14:47:49.817682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.610 [2024-11-17 14:47:49.817780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.610 [2024-11-17 14:47:49.817839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.610 [2024-11-17 14:47:49.817891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.610 [2024-11-17 14:47:49.817909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.610 [2024-11-17 14:47:49.817946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.610 [2024-11-17 14:47:49.818013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.610 [2024-11-17 14:47:49.818033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.610 [2024-11-17 14:47:49.818055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.610 [2024-11-17 14:47:49.818080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.610 [2024-11-17 14:47:49.818096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.610 [2024-11-17 14:47:49.818248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.872 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:04.872 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.872 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.872 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.872 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.872 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.872 14:47:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.872 14:47:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.872 [2024-11-17 14:47:50.316432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:04.872 [2024-11-17 14:47:50.317717] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.872 [2024-11-17 14:47:50.317820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.872 [2024-11-17 14:47:50.317883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.872 [2024-11-17 14:47:50.318018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.872 [2024-11-17 14:47:50.318039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.872 [2024-11-17 14:47:50.318063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.872 [2024-11-17 14:47:50.318088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.872 [2024-11-17 14:47:50.318136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.872 [2024-11-17 14:47:50.318165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.872 [2024-11-17 14:47:50.318188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.872 [2024-11-17 14:47:50.318206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.872 [2024-11-17 14:47:50.318253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.872 14:47:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.872 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:04.872 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.134 14:47:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.19 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.19 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:11:17.373 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:17.373 14:48:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:17.373 14:48:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # 
sort -u 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.965 14:48:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.965 14:48:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.965 14:48:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:23.965 14:48:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.965 [2024-11-17 14:48:08.738247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:23.965 [2024-11-17 14:48:08.739366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.965 [2024-11-17 14:48:08.739401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.965 [2024-11-17 14:48:08.739412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.965 [2024-11-17 14:48:08.739430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.965 [2024-11-17 14:48:08.739438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.965 [2024-11-17 14:48:08.739447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.965 [2024-11-17 14:48:08.739454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.965 [2024-11-17 14:48:08.739462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.965 [2024-11-17 14:48:08.739468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.965 [2024-11-17 14:48:08.739476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.965 [2024-11-17 14:48:08.739483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.965 [2024-11-17 14:48:08.739492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.965 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:23.965 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.965 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.965 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.965 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.965 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.965 14:48:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.965 14:48:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.965 [2024-11-17 14:48:09.238247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:23.965 14:48:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.965 [2024-11-17 14:48:09.239125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.965 [2024-11-17 14:48:09.239147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.965 [2024-11-17 14:48:09.239159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.965 [2024-11-17 14:48:09.239172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.965 [2024-11-17 14:48:09.239181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.965 [2024-11-17 14:48:09.239188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.965 [2024-11-17 14:48:09.239197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.965 [2024-11-17 14:48:09.239204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.965 [2024-11-17 14:48:09.239212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.965 [2024-11-17 14:48:09.239218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.965 [2024-11-17 14:48:09.239226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.965 [2024-11-17 14:48:09.239232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.965 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:23.965 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.226 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:24.226 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.226 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.226 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.226 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.226 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.226 14:48:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.226 14:48:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.488 14:48:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.488 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.488 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.488 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.488 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.488 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.488 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.488 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.488 14:48:09 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.488 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.488 14:48:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:24.488 14:48:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.488 14:48:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.488 14:48:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.781 14:48:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.781 14:48:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.781 14:48:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.781 14:48:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.781 14:48:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.781 14:48:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.781 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.781 [2024-11-17 14:48:22.138461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:36.781 [2024-11-17 14:48:22.139656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.781 [2024-11-17 14:48:22.139691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.781 [2024-11-17 14:48:22.139702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.781 [2024-11-17 14:48:22.139719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.781 [2024-11-17 14:48:22.139726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.781 [2024-11-17 14:48:22.139735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.781 [2024-11-17 14:48:22.139742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.781 [2024-11-17 14:48:22.139750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.781 [2024-11-17 14:48:22.139757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.781 [2024-11-17 14:48:22.139767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.781 [2024-11-17 14:48:22.139773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.781 [2024-11-17 14:48:22.139781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.353 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:37.353 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.353 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.353 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.353 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.353 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.353 14:48:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.353 14:48:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.353 14:48:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.353 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:37.353 14:48:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:37.353 [2024-11-17 14:48:22.838468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:37.353 [2024-11-17 14:48:22.839334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.353 [2024-11-17 14:48:22.839362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.353 [2024-11-17 14:48:22.839373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.353 [2024-11-17 14:48:22.839385] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.353 [2024-11-17 14:48:22.839397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.353 [2024-11-17 14:48:22.839404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.353 [2024-11-17 14:48:22.839414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.353 [2024-11-17 14:48:22.839421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.353 [2024-11-17 14:48:22.839429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.353 [2024-11-17 14:48:22.839436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.353 [2024-11-17 14:48:22.839444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.353 [2024-11-17 14:48:22.839450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.925 14:48:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.925 14:48:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.925 14:48:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.925 14:48:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:50.161 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:50.161 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:50.161 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:50.161 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.162 14:48:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.162 14:48:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.162 14:48:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.162 14:48:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.162 14:48:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.162 14:48:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:50.162 14:48:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:50.162 [2024-11-17 14:48:35.538879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:50.162 [2024-11-17 14:48:35.541261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.162 [2024-11-17 14:48:35.541291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.162 [2024-11-17 14:48:35.541301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.162 [2024-11-17 14:48:35.541317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.162 [2024-11-17 14:48:35.541324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.162 [2024-11-17 14:48:35.541332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.162 [2024-11-17 14:48:35.541340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.162 [2024-11-17 14:48:35.541351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.162 [2024-11-17 14:48:35.541358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.162 [2024-11-17 14:48:35.541367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.162 [2024-11-17 14:48:35.541373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.162 [2024-11-17 14:48:35.541381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.423 [2024-11-17 14:48:35.938888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:50.423 [2024-11-17 14:48:35.939761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.423 [2024-11-17 14:48:35.939789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.423 [2024-11-17 14:48:35.939800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.423 [2024-11-17 14:48:35.939812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.424 [2024-11-17 14:48:35.939821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.424 [2024-11-17 14:48:35.939828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.424 [2024-11-17 14:48:35.939836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.424 [2024-11-17 14:48:35.939843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.424 [2024-11-17 14:48:35.939851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.424 [2024-11-17 14:48:35.939858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.424 [2024-11-17 14:48:35.939868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.424 [2024-11-17 14:48:35.939874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.685 14:48:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.685 14:48:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.685 14:48:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:50.685 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:50.947 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.947 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.947 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.947 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:50.947 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:50.947 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.947 14:48:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.72 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.72 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:12:03.184 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:03.184 14:48:48 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67104 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67104 ']' 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67104 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67104 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67104' 00:12:03.184 killing process with pid 67104 00:12:03.184 14:48:48 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67104 00:12:03.185 14:48:48 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67104 00:12:04.128 14:48:49 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:04.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:04.961 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.961 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.961 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:04.961 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.223 00:12:05.223 real 2m30.732s 00:12:05.223 user 1m52.429s 00:12:05.223 sys 0m16.795s 00:12:05.223 14:48:50 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.223 ************************************ 00:12:05.223 14:48:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.223 END TEST sw_hotplug 00:12:05.223 ************************************ 00:12:05.223 14:48:50 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:05.223 14:48:50 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:05.223 14:48:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:05.223 14:48:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.223 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:12:05.223 ************************************ 00:12:05.223 START TEST nvme_xnvme 00:12:05.223 ************************************ 00:12:05.223 14:48:50 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:05.223 * Looking for test storage... 00:12:05.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:05.223 14:48:50 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.223 14:48:50 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.223 14:48:50 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.223 14:48:50 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.223 14:48:50 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:05.223 14:48:50 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.223 14:48:50 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.223 --rc genhtml_branch_coverage=1 00:12:05.223 --rc genhtml_function_coverage=1 00:12:05.223 --rc genhtml_legend=1 00:12:05.223 --rc geninfo_all_blocks=1 00:12:05.223 --rc geninfo_unexecuted_blocks=1 00:12:05.224 00:12:05.224 ' 00:12:05.224 14:48:50 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.224 --rc genhtml_branch_coverage=1 00:12:05.224 --rc genhtml_function_coverage=1 00:12:05.224 --rc genhtml_legend=1 00:12:05.224 --rc geninfo_all_blocks=1 00:12:05.224 --rc geninfo_unexecuted_blocks=1 00:12:05.224 00:12:05.224 ' 00:12:05.224 14:48:50 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.224 --rc genhtml_branch_coverage=1 00:12:05.224 --rc genhtml_function_coverage=1 00:12:05.224 --rc genhtml_legend=1 00:12:05.224 --rc geninfo_all_blocks=1 00:12:05.224 --rc geninfo_unexecuted_blocks=1 00:12:05.224 00:12:05.224 ' 00:12:05.224 14:48:50 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:05.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.224 --rc genhtml_branch_coverage=1 00:12:05.224 --rc genhtml_function_coverage=1 00:12:05.224 --rc genhtml_legend=1 00:12:05.224 --rc geninfo_all_blocks=1 00:12:05.224 --rc geninfo_unexecuted_blocks=1 00:12:05.224 00:12:05.224 ' 00:12:05.224 14:48:50 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.224 14:48:50 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.224 14:48:50 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.224 14:48:50 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.224 14:48:50 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.224 14:48:50 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.224 14:48:50 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.224 14:48:50 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.224 14:48:50 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:05.224 14:48:50 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.224 14:48:50 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:05.224 14:48:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:05.224 14:48:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.224 14:48:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:05.486 ************************************ 00:12:05.486 START TEST xnvme_to_malloc_dd_copy 00:12:05.486 ************************************ 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:05.486 14:48:50 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:05.486 14:48:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:05.486 { 00:12:05.486 "subsystems": [ 00:12:05.486 { 00:12:05.486 "subsystem": "bdev", 00:12:05.486 "config": [ 00:12:05.486 { 00:12:05.486 "params": { 00:12:05.486 "block_size": 512, 00:12:05.486 "num_blocks": 2097152, 00:12:05.486 "name": "malloc0" 00:12:05.486 }, 00:12:05.486 "method": "bdev_malloc_create" 00:12:05.486 }, 00:12:05.486 { 00:12:05.486 "params": { 00:12:05.486 "io_mechanism": "libaio", 00:12:05.486 "filename": "/dev/nullb0", 00:12:05.486 "name": "null0" 00:12:05.486 }, 00:12:05.486 "method": "bdev_xnvme_create" 00:12:05.486 }, 00:12:05.486 { 00:12:05.486 "method": "bdev_wait_for_examine" 00:12:05.486 } 00:12:05.486 ] 00:12:05.486 } 00:12:05.486 ] 00:12:05.486 } 00:12:05.486 [2024-11-17 14:48:50.878734] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:05.486 [2024-11-17 14:48:50.878878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68503 ] 00:12:05.748 [2024-11-17 14:48:51.042549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.748 [2024-11-17 14:48:51.161902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.298  [2024-11-17T14:48:54.413Z] Copying: 224/1024 [MB] (224 MBps) [2024-11-17T14:48:55.357Z] Copying: 448/1024 [MB] (224 MBps) [2024-11-17T14:48:56.297Z] Copying: 736/1024 [MB] (287 MBps) [2024-11-17T14:48:58.214Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:12:12.671 00:12:12.671 14:48:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:12.671 14:48:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:12.671 14:48:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:12.671 14:48:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:12.671 { 00:12:12.671 "subsystems": [ 00:12:12.671 { 00:12:12.671 "subsystem": "bdev", 00:12:12.671 "config": [ 00:12:12.671 { 00:12:12.671 "params": { 00:12:12.671 "block_size": 512, 00:12:12.671 "num_blocks": 2097152, 00:12:12.671 "name": "malloc0" 00:12:12.671 }, 00:12:12.671 "method": "bdev_malloc_create" 00:12:12.671 }, 00:12:12.671 { 00:12:12.671 "params": { 00:12:12.671 "io_mechanism": "libaio", 00:12:12.671 "filename": "/dev/nullb0", 00:12:12.671 "name": "null0" 00:12:12.671 }, 00:12:12.671 "method": "bdev_xnvme_create" 00:12:12.671 }, 00:12:12.671 { 00:12:12.671 "method": "bdev_wait_for_examine" 00:12:12.671 } 00:12:12.671 ] 00:12:12.671 } 00:12:12.671 ] 00:12:12.671 } 00:12:12.934 [2024-11-17 14:48:58.216238] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:12.935 [2024-11-17 14:48:58.216354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68585 ] 00:12:12.935 [2024-11-17 14:48:58.372956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.935 [2024-11-17 14:48:58.450408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.860  [2024-11-17T14:49:01.346Z] Copying: 301/1024 [MB] (301 MBps) [2024-11-17T14:49:02.289Z] Copying: 604/1024 [MB] (302 MBps) [2024-11-17T14:49:02.861Z] Copying: 907/1024 [MB] (303 MBps) [2024-11-17T14:49:04.778Z] Copying: 1024/1024 [MB] (average 302 MBps) 00:12:19.235 00:12:19.235 14:49:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:19.235 14:49:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:19.235 14:49:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:19.235 14:49:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:19.235 14:49:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:19.235 14:49:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:19.235 { 00:12:19.235 "subsystems": [ 00:12:19.235 { 00:12:19.235 "subsystem": "bdev", 00:12:19.235 "config": [ 00:12:19.235 { 00:12:19.235 "params": { 00:12:19.235 "block_size": 512, 00:12:19.235 "num_blocks": 2097152, 00:12:19.235 "name": "malloc0" 00:12:19.235 }, 00:12:19.235 "method": "bdev_malloc_create" 00:12:19.235 }, 00:12:19.235 { 00:12:19.235 "params": { 00:12:19.235 "io_mechanism": "io_uring", 00:12:19.235 "filename": "/dev/nullb0", 00:12:19.235 "name": "null0" 00:12:19.235 }, 00:12:19.235 "method": "bdev_xnvme_create" 00:12:19.235 }, 00:12:19.235 { 00:12:19.235 "method": "bdev_wait_for_examine" 00:12:19.235 } 00:12:19.235 ] 00:12:19.235 } 00:12:19.235 ] 00:12:19.235 } 00:12:19.235 [2024-11-17 14:49:04.561037] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:19.235 [2024-11-17 14:49:04.561153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68661 ] 00:12:19.235 [2024-11-17 14:49:04.715488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.496 [2024-11-17 14:49:04.793675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.412  [2024-11-17T14:49:07.900Z] Copying: 308/1024 [MB] (308 MBps) [2024-11-17T14:49:08.607Z] Copying: 618/1024 [MB] (309 MBps) [2024-11-17T14:49:08.868Z] Copying: 927/1024 [MB] (309 MBps) [2024-11-17T14:49:10.780Z] Copying: 1024/1024 [MB] (average 309 MBps) 00:12:25.237 00:12:25.237 14:49:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:25.237 14:49:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:25.237 14:49:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:25.237 14:49:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:25.237 { 00:12:25.237 "subsystems": [ 00:12:25.237 { 00:12:25.237 "subsystem": "bdev", 00:12:25.237 "config": [ 00:12:25.237 { 00:12:25.237 "params": { 00:12:25.237 "block_size": 512, 00:12:25.237 "num_blocks": 2097152, 00:12:25.237 "name": "malloc0" 00:12:25.237 }, 00:12:25.237 "method": "bdev_malloc_create" 00:12:25.237 }, 00:12:25.237 { 00:12:25.237 "params": { 00:12:25.237 "io_mechanism": "io_uring", 00:12:25.237 "filename": "/dev/nullb0", 00:12:25.237 "name": "null0" 00:12:25.237 }, 00:12:25.237 "method": "bdev_xnvme_create" 00:12:25.237 }, 00:12:25.237 { 00:12:25.237 "method": "bdev_wait_for_examine" 00:12:25.237 } 00:12:25.237 ] 00:12:25.237 } 00:12:25.237 ] 00:12:25.237 } 00:12:25.498 [2024-11-17 14:49:10.801584] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:25.498 [2024-11-17 14:49:10.801700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68737 ] 00:12:25.498 [2024-11-17 14:49:10.957396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.758 [2024-11-17 14:49:11.039253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.671  [2024-11-17T14:49:13.785Z] Copying: 314/1024 [MB] (314 MBps) [2024-11-17T14:49:15.170Z] Copying: 630/1024 [MB] (315 MBps) [2024-11-17T14:49:15.170Z] Copying: 945/1024 [MB] (315 MBps) [2024-11-17T14:49:17.086Z] Copying: 1024/1024 [MB] (average 315 MBps) 00:12:31.543 00:12:31.543 14:49:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:31.543 14:49:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:31.543 ************************************ 00:12:31.543 END TEST xnvme_to_malloc_dd_copy 00:12:31.543 ************************************ 00:12:31.543 00:12:31.543 real 0m26.162s 00:12:31.543 user 0m22.961s 00:12:31.543 sys 0m2.650s 00:12:31.543 14:49:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.543 14:49:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:31.543 14:49:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:31.543 14:49:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.543 14:49:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.543 14:49:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.543 ************************************ 00:12:31.543 START TEST xnvme_bdevperf 00:12:31.543 ************************************ 00:12:31.543 14:49:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:31.543 14:49:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:31.543 14:49:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:31.543 14:49:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:31.543 
14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.543 14:49:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.543 { 00:12:31.543 "subsystems": [ 00:12:31.543 { 00:12:31.543 "subsystem": "bdev", 00:12:31.543 "config": [ 00:12:31.543 { 00:12:31.543 "params": { 00:12:31.543 "io_mechanism": "libaio", 00:12:31.543 "filename": "/dev/nullb0", 00:12:31.543 "name": "null0" 00:12:31.543 }, 00:12:31.543 "method": "bdev_xnvme_create" 00:12:31.543 }, 00:12:31.543 { 00:12:31.543 "method": "bdev_wait_for_examine" 00:12:31.543 } 00:12:31.543 ] 00:12:31.543 } 00:12:31.543 ] 00:12:31.543 } 00:12:31.804 [2024-11-17 14:49:17.097488] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:31.804 [2024-11-17 14:49:17.097633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68836 ] 00:12:31.804 [2024-11-17 14:49:17.260887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.066 [2024-11-17 14:49:17.382598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.328 Running I/O for 5 seconds... 00:12:34.214 154688.00 IOPS, 604.25 MiB/s [2024-11-17T14:49:20.715Z] 170688.00 IOPS, 666.75 MiB/s [2024-11-17T14:49:22.222Z] 181184.00 IOPS, 707.75 MiB/s [2024-11-17T14:49:22.793Z] 186624.00 IOPS, 729.00 MiB/s [2024-11-17T14:49:22.793Z] 189888.00 IOPS, 741.75 MiB/s 00:12:37.250 Latency(us) 00:12:37.251 [2024-11-17T14:49:22.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.251 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:37.251 null0 : 5.00 189826.58 741.51 0.00 0.00 334.76 308.78 2054.30 00:12:37.251 [2024-11-17T14:49:22.794Z] =================================================================================================================== 00:12:37.251 [2024-11-17T14:49:22.794Z] Total : 189826.58 741.51 0.00 0.00 334.76 308.78 2054.30 00:12:37.823 14:49:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:37.823 14:49:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:37.823 14:49:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:37.823 14:49:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.823 14:49:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:37.823 14:49:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.823 { 00:12:37.823 "subsystems": [ 00:12:37.823 { 00:12:37.823 "subsystem": "bdev", 00:12:37.823 "config": [ 00:12:37.823 { 00:12:37.823 "params": { 00:12:37.823 "io_mechanism": "io_uring", 00:12:37.823 "filename": "/dev/nullb0", 00:12:37.823 "name": "null0" 00:12:37.823 }, 00:12:37.823 "method": "bdev_xnvme_create" 00:12:37.823 }, 
00:12:37.823 { 00:12:37.823 "method": "bdev_wait_for_examine" 00:12:37.823 } 00:12:37.823 ] 00:12:37.823 } 00:12:37.823 ] 00:12:37.823 } 00:12:37.823 [2024-11-17 14:49:23.309953] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:37.823 [2024-11-17 14:49:23.310114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68915 ] 00:12:38.084 [2024-11-17 14:49:23.468564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.084 [2024-11-17 14:49:23.550766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.345 Running I/O for 5 seconds... 00:12:40.232 230656.00 IOPS, 901.00 MiB/s [2024-11-17T14:49:27.162Z] 230592.00 IOPS, 900.75 MiB/s [2024-11-17T14:49:28.105Z] 230549.33 IOPS, 900.58 MiB/s [2024-11-17T14:49:29.047Z] 230512.00 IOPS, 900.44 MiB/s 00:12:43.504 Latency(us) 00:12:43.504 [2024-11-17T14:49:29.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.504 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:43.504 null0 : 5.00 230464.39 900.25 0.00 0.00 275.54 145.72 1575.38 00:12:43.504 [2024-11-17T14:49:29.047Z] =================================================================================================================== 00:12:43.504 [2024-11-17T14:49:29.047Z] Total : 230464.39 900.25 0.00 0.00 275.54 145.72 1575.38 00:12:43.765 14:49:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:43.765 14:49:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:44.028 00:12:44.028 real 0m12.322s 00:12:44.028 user 0m9.931s 00:12:44.028 sys 0m2.164s 00:12:44.028 14:49:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.028 ************************************ 00:12:44.028 END TEST xnvme_bdevperf 00:12:44.028 ************************************ 00:12:44.028 14:49:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:44.028 00:12:44.028 real 0m38.771s 00:12:44.028 user 0m33.002s 00:12:44.028 sys 0m4.941s 00:12:44.028 ************************************ 00:12:44.028 END TEST nvme_xnvme 00:12:44.028 ************************************ 00:12:44.028 14:49:29 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.028 14:49:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.028 14:49:29 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:44.028 14:49:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.028 14:49:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.028 14:49:29 -- common/autotest_common.sh@10 -- # set +x 00:12:44.028 ************************************ 00:12:44.028 START TEST blockdev_xnvme 00:12:44.028 ************************************ 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:44.028 * Looking for test storage... 
00:12:44.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.028 14:49:29 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.028 --rc genhtml_branch_coverage=1 00:12:44.028 --rc genhtml_function_coverage=1 00:12:44.028 --rc genhtml_legend=1 00:12:44.028 --rc geninfo_all_blocks=1 00:12:44.028 --rc geninfo_unexecuted_blocks=1 00:12:44.028 00:12:44.028 ' 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.028 --rc genhtml_branch_coverage=1 00:12:44.028 --rc genhtml_function_coverage=1 00:12:44.028 --rc genhtml_legend=1 
00:12:44.028 --rc geninfo_all_blocks=1 00:12:44.028 --rc geninfo_unexecuted_blocks=1 00:12:44.028 00:12:44.028 ' 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.028 --rc genhtml_branch_coverage=1 00:12:44.028 --rc genhtml_function_coverage=1 00:12:44.028 --rc genhtml_legend=1 00:12:44.028 --rc geninfo_all_blocks=1 00:12:44.028 --rc geninfo_unexecuted_blocks=1 00:12:44.028 00:12:44.028 ' 00:12:44.028 14:49:29 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.028 --rc genhtml_branch_coverage=1 00:12:44.028 --rc genhtml_function_coverage=1 00:12:44.028 --rc genhtml_legend=1 00:12:44.028 --rc geninfo_all_blocks=1 00:12:44.028 --rc geninfo_unexecuted_blocks=1 00:12:44.028 00:12:44.028 ' 00:12:44.028 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:44.028 14:49:29 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:44.028 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:44.028 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:44.028 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:44.028 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:44.028 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:44.028 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:44.028 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:44.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69058 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69058 00:12:44.290 14:49:29 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 69058 ']' 00:12:44.290 14:49:29 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.290 14:49:29 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.290 14:49:29 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.290 14:49:29 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.290 14:49:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.290 14:49:29 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:44.290 [2024-11-17 14:49:29.653889] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:44.290 [2024-11-17 14:49:29.654032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69058 ] 00:12:44.290 [2024-11-17 14:49:29.811646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.551 [2024-11-17 14:49:29.926024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.124 14:49:30 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.124 14:49:30 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:12:45.124 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:45.124 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:45.124 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:45.124 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:45.124 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:45.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:45.646 Waiting for block devices as requested 00:12:45.646 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.646 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.907 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.907 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.195 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:51.195 14:49:36 blockdev_xnvme -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.195 14:49:36 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:51.195 nvme0n1 00:12:51.195 nvme1n1 00:12:51.195 nvme2n1 00:12:51.195 nvme2n2 00:12:51.195 nvme2n3 00:12:51.195 nvme3n1 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:51.195 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd 
bdev_get_bdevs 00:12:51.195 14:49:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.196 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:51.196 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "196e9406-7ab6-4446-8048-7632ade9646b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "196e9406-7ab6-4446-8048-7632ade9646b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "28490dd5-5d7a-4989-8f07-e47acad6855d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "28490dd5-5d7a-4989-8f07-e47acad6855d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "fb27ea8b-5803-4591-a042-57faad260064"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fb27ea8b-5803-4591-a042-57faad260064",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d125ce6f-45d1-4ef1-89a7-f1eaa147dfdd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d125ce6f-45d1-4ef1-89a7-f1eaa147dfdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "6d4c7b5c-aff8-4caf-872f-740c2a7f201c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6d4c7b5c-aff8-4caf-872f-740c2a7f201c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "43a81752-a906-4e8d-95fe-896728e85358"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "43a81752-a906-4e8d-95fe-896728e85358",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:51.196 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:51.196 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:51.196 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:51.196 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:51.196 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69058 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 69058 ']' 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 69058 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69058 00:12:51.196 killing process with pid 69058 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 69058' 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 69058 00:12:51.196 14:49:36 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 69058 00:12:52.581 14:49:37 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:52.581 14:49:37 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:52.581 14:49:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:52.581 14:49:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.581 14:49:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.581 ************************************ 00:12:52.581 START TEST bdev_hello_world 00:12:52.581 ************************************ 00:12:52.581 14:49:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:52.581 [2024-11-17 14:49:37.869037] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:52.581 [2024-11-17 14:49:37.869148] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69412 ] 00:12:52.581 [2024-11-17 14:49:38.025778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.581 [2024-11-17 14:49:38.101956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.153 [2024-11-17 14:49:38.384106] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:53.153 [2024-11-17 14:49:38.384143] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:53.153 [2024-11-17 14:49:38.384155] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:53.153 [2024-11-17 14:49:38.385610] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:53.153 [2024-11-17 14:49:38.385837] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:53.153 [2024-11-17 14:49:38.385853] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:53.153 [2024-11-17 14:49:38.386180] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
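The sequence above is how blockdev.sh picks a device for the hello-world example: the JSON emitted by bdev_get_bdevs is piped through jq -r .name into the bdevs_name array, the first entry becomes hello_world_bdev (nvme0n1 in this run), and the hello_bdev example is launched against it. A minimal standalone sketch of the same flow, assuming a target loaded with the same --json config is still answering RPCs on the default socket for the query step, and using SPDK_DIR as a stand-in for the repo path seen in the trace:

SPDK_DIR=/home/vagrant/spdk_repo/spdk                      # path taken from the trace above
# rpc.py returns a JSON array here, hence '.[].name'; the harness streams objects and uses plain .name.
mapfile -t bdevs_name < <("$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs | jq -r '.[].name')
hello_world_bdev=${bdevs_name[0]}                          # nvme0n1 in this run
# hello_bdev loads the config itself, opens the bdev, writes "Hello World!" and reads it back,
# matching the NOTICE lines logged above (in the harness the setup target is stopped first).
"$SPDK_DIR/build/examples/hello_bdev" --json "$SPDK_DIR/test/bdev/bdev.json" -b "$hello_world_bdev"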
00:12:53.153 00:12:53.153 [2024-11-17 14:49:38.386197] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:53.413 00:12:53.414 real 0m1.132s 00:12:53.414 user 0m0.865s 00:12:53.414 sys 0m0.156s 00:12:53.414 ************************************ 00:12:53.414 END TEST bdev_hello_world 00:12:53.414 ************************************ 00:12:53.414 14:49:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.414 14:49:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:53.675 14:49:38 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:53.675 14:49:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:53.675 14:49:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.675 14:49:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.675 ************************************ 00:12:53.675 START TEST bdev_bounds 00:12:53.675 ************************************ 00:12:53.675 Process bdevio pid: 69443 00:12:53.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69443 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69443' 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69443 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 69443 ']' 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.675 14:49:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:53.675 [2024-11-17 14:49:39.057942] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
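The bdev_bounds test launched here follows a wait-then-drive pattern: bdevio starts in wait mode (-w) with the shared JSON config, the harness polls until the default RPC socket answers, and tests.py perform_tests then kicks off the CUnit suites shown below, one per bdev. A rough standalone equivalent, with paths taken from this trace and a simple socket-existence poll standing in for the harness's waitforlisten helper:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK_DIR/test/bdev/bdev.json" &
bdevio_pid=$!
# waitforlisten also verifies the RPC server responds; checking for the socket is a simplification.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests        # drives one CUnit suite per bdev
kill "$bdevio_pid"; wait "$bdevio_pid"                     # wait reports the kill status; ignore it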
00:12:53.675 [2024-11-17 14:49:39.058055] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69443 ] 00:12:53.676 [2024-11-17 14:49:39.214082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.936 [2024-11-17 14:49:39.295548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.936 [2024-11-17 14:49:39.295802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.936 [2024-11-17 14:49:39.295835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.508 14:49:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.508 14:49:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:54.508 14:49:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:54.508 I/O targets: 00:12:54.508 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:54.508 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:54.508 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.508 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.508 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.508 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:54.508 00:12:54.508 00:12:54.508 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.508 http://cunit.sourceforge.net/ 00:12:54.508 00:12:54.508 00:12:54.508 Suite: bdevio tests on: nvme3n1 00:12:54.508 Test: blockdev write read block ...passed 00:12:54.508 Test: blockdev write zeroes read block ...passed 00:12:54.508 Test: blockdev write zeroes read no split ...passed 00:12:54.508 Test: blockdev write zeroes read split ...passed 00:12:54.508 Test: blockdev write zeroes read split partial ...passed 00:12:54.508 Test: blockdev reset ...passed 00:12:54.508 Test: blockdev write read 8 blocks ...passed 00:12:54.508 Test: blockdev write read size > 128k ...passed 00:12:54.508 Test: blockdev write read invalid size ...passed 00:12:54.508 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.508 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.508 Test: blockdev write read max offset ...passed 00:12:54.508 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.508 Test: blockdev writev readv 8 blocks ...passed 00:12:54.508 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.508 Test: blockdev writev readv block ...passed 00:12:54.509 Test: blockdev writev readv size > 128k ...passed 00:12:54.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.509 Test: blockdev comparev and writev ...passed 00:12:54.509 Test: blockdev nvme passthru rw ...passed 00:12:54.509 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.509 Test: blockdev nvme admin passthru ...passed 00:12:54.509 Test: blockdev copy ...passed 00:12:54.509 Suite: bdevio tests on: nvme2n3 00:12:54.509 Test: blockdev write read block ...passed 00:12:54.509 Test: blockdev write zeroes read block ...passed 00:12:54.509 Test: blockdev write zeroes read no split ...passed 00:12:54.509 Test: blockdev write zeroes read split ...passed 00:12:54.770 Test: blockdev write zeroes read split partial ...passed 00:12:54.770 Test: blockdev reset ...passed 
00:12:54.770 Test: blockdev write read 8 blocks ...passed 00:12:54.770 Test: blockdev write read size > 128k ...passed 00:12:54.770 Test: blockdev write read invalid size ...passed 00:12:54.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.770 Test: blockdev write read max offset ...passed 00:12:54.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.770 Test: blockdev writev readv 8 blocks ...passed 00:12:54.770 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.770 Test: blockdev writev readv block ...passed 00:12:54.770 Test: blockdev writev readv size > 128k ...passed 00:12:54.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.770 Test: blockdev comparev and writev ...passed 00:12:54.770 Test: blockdev nvme passthru rw ...passed 00:12:54.770 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.770 Test: blockdev nvme admin passthru ...passed 00:12:54.770 Test: blockdev copy ...passed 00:12:54.770 Suite: bdevio tests on: nvme2n2 00:12:54.770 Test: blockdev write read block ...passed 00:12:54.770 Test: blockdev write zeroes read block ...passed 00:12:54.770 Test: blockdev write zeroes read no split ...passed 00:12:54.770 Test: blockdev write zeroes read split ...passed 00:12:54.770 Test: blockdev write zeroes read split partial ...passed 00:12:54.770 Test: blockdev reset ...passed 00:12:54.770 Test: blockdev write read 8 blocks ...passed 00:12:54.770 Test: blockdev write read size > 128k ...passed 00:12:54.770 Test: blockdev write read invalid size ...passed 00:12:54.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.770 Test: blockdev write read max offset ...passed 00:12:54.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.770 Test: blockdev writev readv 8 blocks ...passed 00:12:54.770 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.770 Test: blockdev writev readv block ...passed 00:12:54.770 Test: blockdev writev readv size > 128k ...passed 00:12:54.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.770 Test: blockdev comparev and writev ...passed 00:12:54.771 Test: blockdev nvme passthru rw ...passed 00:12:54.771 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.771 Test: blockdev nvme admin passthru ...passed 00:12:54.771 Test: blockdev copy ...passed 00:12:54.771 Suite: bdevio tests on: nvme2n1 00:12:54.771 Test: blockdev write read block ...passed 00:12:54.771 Test: blockdev write zeroes read block ...passed 00:12:54.771 Test: blockdev write zeroes read no split ...passed 00:12:54.771 Test: blockdev write zeroes read split ...passed 00:12:54.771 Test: blockdev write zeroes read split partial ...passed 00:12:54.771 Test: blockdev reset ...passed 00:12:54.771 Test: blockdev write read 8 blocks ...passed 00:12:54.771 Test: blockdev write read size > 128k ...passed 00:12:54.771 Test: blockdev write read invalid size ...passed 00:12:54.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.771 Test: blockdev write read max offset ...passed 00:12:54.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.771 Test: blockdev writev readv 8 blocks 
...passed 00:12:54.771 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.771 Test: blockdev writev readv block ...passed 00:12:54.771 Test: blockdev writev readv size > 128k ...passed 00:12:54.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.771 Test: blockdev comparev and writev ...passed 00:12:54.771 Test: blockdev nvme passthru rw ...passed 00:12:54.771 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.771 Test: blockdev nvme admin passthru ...passed 00:12:54.771 Test: blockdev copy ...passed 00:12:54.771 Suite: bdevio tests on: nvme1n1 00:12:54.771 Test: blockdev write read block ...passed 00:12:54.771 Test: blockdev write zeroes read block ...passed 00:12:54.771 Test: blockdev write zeroes read no split ...passed 00:12:54.771 Test: blockdev write zeroes read split ...passed 00:12:54.771 Test: blockdev write zeroes read split partial ...passed 00:12:54.771 Test: blockdev reset ...passed 00:12:54.771 Test: blockdev write read 8 blocks ...passed 00:12:54.771 Test: blockdev write read size > 128k ...passed 00:12:54.771 Test: blockdev write read invalid size ...passed 00:12:54.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.771 Test: blockdev write read max offset ...passed 00:12:54.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.771 Test: blockdev writev readv 8 blocks ...passed 00:12:54.771 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.771 Test: blockdev writev readv block ...passed 00:12:54.771 Test: blockdev writev readv size > 128k ...passed 00:12:54.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.771 Test: blockdev comparev and writev ...passed 00:12:54.771 Test: blockdev nvme passthru rw ...passed 00:12:54.771 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.771 Test: blockdev nvme admin passthru ...passed 00:12:54.771 Test: blockdev copy ...passed 00:12:54.771 Suite: bdevio tests on: nvme0n1 00:12:54.771 Test: blockdev write read block ...passed 00:12:54.771 Test: blockdev write zeroes read block ...passed 00:12:54.771 Test: blockdev write zeroes read no split ...passed 00:12:54.771 Test: blockdev write zeroes read split ...passed 00:12:54.771 Test: blockdev write zeroes read split partial ...passed 00:12:54.771 Test: blockdev reset ...passed 00:12:54.771 Test: blockdev write read 8 blocks ...passed 00:12:54.771 Test: blockdev write read size > 128k ...passed 00:12:54.771 Test: blockdev write read invalid size ...passed 00:12:54.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.771 Test: blockdev write read max offset ...passed 00:12:54.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.771 Test: blockdev writev readv 8 blocks ...passed 00:12:54.771 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.771 Test: blockdev writev readv block ...passed 00:12:54.771 Test: blockdev writev readv size > 128k ...passed 00:12:54.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.771 Test: blockdev comparev and writev ...passed 00:12:54.771 Test: blockdev nvme passthru rw ...passed 00:12:54.771 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.771 Test: blockdev nvme admin passthru ...passed 00:12:54.771 Test: blockdev copy ...passed 
00:12:54.771 00:12:54.771 Run Summary: Type Total Ran Passed Failed Inactive 00:12:54.771 suites 6 6 n/a 0 0 00:12:54.771 tests 138 138 138 0 0 00:12:54.771 asserts 780 780 780 0 n/a 00:12:54.771 00:12:54.771 Elapsed time = 0.837 seconds 00:12:54.771 0 00:12:54.771 14:49:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69443 00:12:54.771 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 69443 ']' 00:12:54.771 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 69443 00:12:54.771 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:54.771 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.771 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69443 00:12:55.032 killing process with pid 69443 00:12:55.032 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.032 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.032 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69443' 00:12:55.032 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 69443 00:12:55.032 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 69443 00:12:55.605 14:49:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:55.605 00:12:55.605 real 0m1.887s 00:12:55.605 user 0m4.779s 00:12:55.605 sys 0m0.253s 00:12:55.605 ************************************ 00:12:55.605 END TEST bdev_bounds 00:12:55.605 ************************************ 00:12:55.605 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.605 14:49:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:55.605 14:49:40 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:55.605 14:49:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:55.605 14:49:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.605 14:49:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.605 ************************************ 00:12:55.605 START TEST bdev_nbd 00:12:55.605 ************************************ 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:55.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
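bdev_nbd exercises the same bdevs through the kernel NBD driver: nbd_function_test starts a bare bdev_svc on its own RPC socket, attaches each bdev to a /dev/nbdX node with nbd_start_disk, lists the mappings with nbd_get_disks, and detaches them again with nbd_stop_disk, as the trace that follows shows. A condensed sketch of that RPC round-trip for a single device, with SPDK_DIR standing in for the repo path and the socket name taken from the trace:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk-nbd.sock
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$RPC_SOCK" -i 0 --json "$SPDK_DIR/test/bdev/bdev.json" &
until [ -S "$RPC_SOCK" ]; do sleep 0.1; done                # stand-in for waitforlisten
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" nbd_start_disk nvme0n1 /dev/nbd0
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" nbd_get_disks | jq -r '.[] | .nbd_device'
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" nbd_stop_disk /dev/nbd0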
00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69498 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69498 /var/tmp/spdk-nbd.sock 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 69498 ']' 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.605 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:55.605 [2024-11-17 14:49:41.009879] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
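Every nbd_start_disk in the trace below is followed by the same readiness-and-sanity check: wait until the kernel lists the node in /proc/partitions, read a single 4096-byte block with O_DIRECT, and confirm the copy is non-empty before moving on to the next device. A condensed sketch of that per-device check; the harness splits it between its waitfornbd helper and an explicit dd, so the combined function name here is hypothetical, and /tmp/nbdtest replaces the scratch file under the repo:

check_nbd_ready() {
    local nbd_name=$1 i size
    # Retry until the kernel has registered the device, roughly what the autotest helper does.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One direct-I/O read proves the NBD connection actually serves data from the bdev.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                                        # fail if nothing was read back
}
check_nbd_ready nbd0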
00:12:55.605 [2024-11-17 14:49:41.010271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.866 [2024-11-17 14:49:41.162690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.866 [2024-11-17 14:49:41.238700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.504 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.504 
1+0 records in 00:12:56.504 1+0 records out 00:12:56.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335139 s, 12.2 MB/s 00:12:56.504 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.790 1+0 records in 00:12:56.790 1+0 records out 00:12:56.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00089686 s, 4.6 MB/s 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:56.790 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.791 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.791 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:57.052 14:49:42 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.052 1+0 records in 00:12:57.052 1+0 records out 00:12:57.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731844 s, 5.6 MB/s 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.052 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.314 1+0 records in 00:12:57.314 1+0 records out 00:12:57.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112077 s, 3.7 MB/s 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.314 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.576 1+0 records in 00:12:57.576 1+0 records out 00:12:57.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000868144 s, 4.7 MB/s 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.576 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:57.837 14:49:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.837 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.838 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.838 1+0 records in 00:12:57.838 1+0 records out 00:12:57.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098885 s, 4.1 MB/s 00:12:57.838 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.838 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:57.838 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.838 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.838 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:57.838 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.838 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.838 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:58.099 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:58.099 { 00:12:58.099 "nbd_device": "/dev/nbd0", 00:12:58.099 "bdev_name": "nvme0n1" 00:12:58.099 }, 00:12:58.099 { 00:12:58.099 "nbd_device": "/dev/nbd1", 00:12:58.099 "bdev_name": "nvme1n1" 00:12:58.099 }, 00:12:58.099 { 00:12:58.099 "nbd_device": "/dev/nbd2", 00:12:58.099 "bdev_name": "nvme2n1" 00:12:58.099 }, 00:12:58.099 { 00:12:58.099 "nbd_device": "/dev/nbd3", 00:12:58.099 "bdev_name": "nvme2n2" 00:12:58.099 }, 00:12:58.099 { 00:12:58.099 "nbd_device": "/dev/nbd4", 00:12:58.099 "bdev_name": "nvme2n3" 00:12:58.099 }, 00:12:58.099 { 00:12:58.099 "nbd_device": "/dev/nbd5", 00:12:58.100 "bdev_name": "nvme3n1" 00:12:58.100 } 00:12:58.100 ]' 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:58.100 { 00:12:58.100 "nbd_device": "/dev/nbd0", 00:12:58.100 "bdev_name": "nvme0n1" 00:12:58.100 }, 00:12:58.100 { 00:12:58.100 "nbd_device": "/dev/nbd1", 00:12:58.100 "bdev_name": "nvme1n1" 00:12:58.100 }, 00:12:58.100 { 00:12:58.100 "nbd_device": "/dev/nbd2", 00:12:58.100 "bdev_name": "nvme2n1" 00:12:58.100 }, 00:12:58.100 { 00:12:58.100 "nbd_device": "/dev/nbd3", 00:12:58.100 "bdev_name": "nvme2n2" 00:12:58.100 }, 00:12:58.100 { 00:12:58.100 "nbd_device": "/dev/nbd4", 00:12:58.100 "bdev_name": "nvme2n3" 00:12:58.100 }, 00:12:58.100 { 00:12:58.100 "nbd_device": 
"/dev/nbd5", 00:12:58.100 "bdev_name": "nvme3n1" 00:12:58.100 } 00:12:58.100 ]' 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.100 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.361 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.623 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.884 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:59.145 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:59.407 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.669 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:59.669 /dev/nbd0 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:59.669 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.669 1+0 records in 00:12:59.669 1+0 records out 00:12:59.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100919 s, 4.1 MB/s 00:12:59.670 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.670 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:59.670 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.670 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:59.670 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:59.670 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.670 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.670 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:59.931 /dev/nbd1 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.931 1+0 records in 00:12:59.931 1+0 records out 00:12:59.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969579 s, 4.2 MB/s 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:59.931 14:49:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.931 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:00.193 /dev/nbd10 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.193 1+0 records in 00:13:00.193 1+0 records out 00:13:00.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103837 s, 3.9 MB/s 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.193 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:00.455 /dev/nbd11 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.455 14:49:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.455 1+0 records in 00:13:00.455 1+0 records out 00:13:00.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101699 s, 4.0 MB/s 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.455 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:00.717 /dev/nbd12 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.717 1+0 records in 00:13:00.717 1+0 records out 00:13:00.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079937 s, 5.1 MB/s 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.717 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:00.978 /dev/nbd13 00:13:00.978 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:00.978 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:00.978 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:00.978 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:00.978 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.979 1+0 records in 00:13:00.979 1+0 records out 00:13:00.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00177805 s, 2.3 MB/s 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.979 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:01.240 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:01.240 { 00:13:01.240 "nbd_device": "/dev/nbd0", 00:13:01.240 "bdev_name": "nvme0n1" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd1", 00:13:01.241 "bdev_name": "nvme1n1" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd10", 00:13:01.241 "bdev_name": "nvme2n1" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd11", 00:13:01.241 "bdev_name": "nvme2n2" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd12", 00:13:01.241 "bdev_name": "nvme2n3" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd13", 00:13:01.241 "bdev_name": "nvme3n1" 00:13:01.241 } 00:13:01.241 ]' 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd0", 00:13:01.241 "bdev_name": "nvme0n1" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd1", 00:13:01.241 "bdev_name": "nvme1n1" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd10", 00:13:01.241 "bdev_name": "nvme2n1" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd11", 00:13:01.241 "bdev_name": "nvme2n2" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd12", 00:13:01.241 "bdev_name": "nvme2n3" 00:13:01.241 }, 00:13:01.241 { 00:13:01.241 "nbd_device": "/dev/nbd13", 00:13:01.241 "bdev_name": "nvme3n1" 00:13:01.241 } 00:13:01.241 ]' 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:01.241 /dev/nbd1 00:13:01.241 /dev/nbd10 00:13:01.241 /dev/nbd11 00:13:01.241 /dev/nbd12 00:13:01.241 /dev/nbd13' 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:01.241 /dev/nbd1 00:13:01.241 /dev/nbd10 00:13:01.241 /dev/nbd11 00:13:01.241 /dev/nbd12 00:13:01.241 /dev/nbd13' 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:01.241 256+0 records in 00:13:01.241 256+0 records out 00:13:01.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501759 s, 209 MB/s 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.241 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:01.502 256+0 records in 00:13:01.502 256+0 records out 00:13:01.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.21809 s, 4.8 MB/s 00:13:01.502 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.502 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:01.762 256+0 records in 00:13:01.762 256+0 records out 00:13:01.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.300957 s, 
3.5 MB/s 00:13:01.762 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.762 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:02.023 256+0 records in 00:13:02.023 256+0 records out 00:13:02.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203513 s, 5.2 MB/s 00:13:02.023 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.023 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:02.285 256+0 records in 00:13:02.285 256+0 records out 00:13:02.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.206916 s, 5.1 MB/s 00:13:02.285 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.285 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:02.547 256+0 records in 00:13:02.547 256+0 records out 00:13:02.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218786 s, 4.8 MB/s 00:13:02.547 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.547 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:02.547 256+0 records in 00:13:02.547 256+0 records out 00:13:02.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137242 s, 7.6 MB/s 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:02.547 
14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:02.547 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.809 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.070 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.332 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.594 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.854 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.114 
14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:04.114 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:04.375 malloc_lvol_verify 00:13:04.375 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:04.635 95ffece5-3940-4aee-8430-839fc8cf16d7 00:13:04.635 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:04.896 b8d35cb0-580e-4eb0-9944-48086b4986cd 00:13:04.896 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:05.157 /dev/nbd0 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:05.157 mke2fs 1.47.0 (5-Feb-2023) 00:13:05.157 Discarding device blocks: 0/4096 
done 00:13:05.157 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:05.157 00:13:05.157 Allocating group tables: 0/1 done 00:13:05.157 Writing inode tables: 0/1 done 00:13:05.157 Creating journal (1024 blocks): done 00:13:05.157 Writing superblocks and filesystem accounting information: 0/1 done 00:13:05.157 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:05.157 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69498 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 69498 ']' 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 69498 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69498 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.419 killing process with pid 69498 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69498' 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 69498 00:13:05.419 14:49:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 69498 00:13:06.364 14:49:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:06.364 00:13:06.364 real 0m10.608s 00:13:06.364 user 0m14.262s 00:13:06.364 sys 0m3.620s 00:13:06.364 ************************************ 00:13:06.364 END TEST bdev_nbd 00:13:06.364 ************************************ 00:13:06.364 14:49:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.364 14:49:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
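For reference, the nbd_common.sh helpers traced in the bdev_nbd test above reduce to a simple export/write/verify/teardown round trip. The following is a rough sketch only, assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock and exposes a bdev named nvme0n1 (both taken from the trace); the temporary file path used here is illustrative, the test itself uses test/bdev/nbdrandtest inside the repository as shown above:

    # Export the bdev as a kernel nbd device via the JSON-RPC helper
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    # List active exports (returns JSON pairs of nbd_device / bdev_name)
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    # Write random data through the nbd device, then compare it against the source file
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
    # Tear the export down again
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The lvol portion of the test follows the same pattern, additionally creating a malloc bdev, an lvstore, and an lvol over the RPC socket before exporting the lvol as /dev/nbd0 and running mkfs.ext4 on it, as the trace above shows.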
00:13:06.364 14:49:51 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:06.364 14:49:51 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:06.364 14:49:51 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:06.364 14:49:51 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:06.364 14:49:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.364 14:49:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.364 14:49:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.364 ************************************ 00:13:06.364 START TEST bdev_fio 00:13:06.364 ************************************ 00:13:06.364 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:06.364 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:06.364 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:06.364 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:06.364 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:06.365 ************************************ 00:13:06.365 START TEST bdev_fio_rw_verify 00:13:06.365 ************************************ 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:06.365 14:49:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:06.365 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.365 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.365 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.365 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.365 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.365 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:06.365 fio-3.35 00:13:06.365 Starting 6 threads 00:13:18.606 00:13:18.606 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69905: Sun Nov 17 14:50:02 2024 00:13:18.606 read: IOPS=14.0k, BW=54.8MiB/s (57.5MB/s)(549MiB/10002msec) 00:13:18.606 slat (usec): min=2, max=2482, avg= 6.28, stdev=14.63 00:13:18.606 clat (usec): min=76, max=9265, avg=1424.01, stdev=863.37 00:13:18.606 lat (usec): min=79, max=9275, avg=1430.29, stdev=864.03 
00:13:18.606 clat percentiles (usec): 00:13:18.606 | 50.000th=[ 1319], 99.000th=[ 3982], 99.900th=[ 5407], 99.990th=[ 7439], 00:13:18.606 | 99.999th=[ 7701] 00:13:18.606 write: IOPS=14.4k, BW=56.4MiB/s (59.1MB/s)(564MiB/10002msec); 0 zone resets 00:13:18.606 slat (usec): min=12, max=4931, avg=41.18, stdev=145.91 00:13:18.606 clat (usec): min=77, max=10481, avg=1611.03, stdev=926.99 00:13:18.606 lat (usec): min=90, max=10511, avg=1652.21, stdev=941.28 00:13:18.606 clat percentiles (usec): 00:13:18.606 | 50.000th=[ 1483], 99.000th=[ 4424], 99.900th=[ 5866], 99.990th=[ 8029], 00:13:18.606 | 99.999th=[10421] 00:13:18.606 bw ( KiB/s): min=45679, max=106371, per=100.00%, avg=58292.05, stdev=2757.22, samples=114 00:13:18.606 iops : min=11419, max=26592, avg=14571.89, stdev=689.36, samples=114 00:13:18.606 lat (usec) : 100=0.01%, 250=3.31%, 500=8.80%, 750=9.12%, 1000=9.68% 00:13:18.606 lat (msec) : 2=43.45%, 4=24.20%, 10=1.42%, 20=0.01% 00:13:18.606 cpu : usr=45.71%, sys=30.51%, ctx=5419, majf=0, minf=14473 00:13:18.606 IO depths : 1=11.6%, 2=24.0%, 4=51.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:18.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.606 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.606 issued rwts: total=140430,144424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.606 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:18.606 00:13:18.606 Run status group 0 (all jobs): 00:13:18.606 READ: bw=54.8MiB/s (57.5MB/s), 54.8MiB/s-54.8MiB/s (57.5MB/s-57.5MB/s), io=549MiB (575MB), run=10002-10002msec 00:13:18.606 WRITE: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=564MiB (592MB), run=10002-10002msec 00:13:18.606 ----------------------------------------------------- 00:13:18.606 Suppressions used: 00:13:18.606 count bytes template 00:13:18.606 6 48 /usr/src/fio/parse.c 00:13:18.606 3899 374304 /usr/src/fio/iolog.c 00:13:18.606 1 8 libtcmalloc_minimal.so 00:13:18.606 1 904 libcrypto.so 00:13:18.606 ----------------------------------------------------- 00:13:18.606 00:13:18.606 00:13:18.606 real 0m12.067s 00:13:18.606 user 0m28.966s 00:13:18.606 sys 0m18.667s 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.607 ************************************ 00:13:18.607 END TEST bdev_fio_rw_verify 00:13:18.607 ************************************ 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "196e9406-7ab6-4446-8048-7632ade9646b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "196e9406-7ab6-4446-8048-7632ade9646b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "28490dd5-5d7a-4989-8f07-e47acad6855d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "28490dd5-5d7a-4989-8f07-e47acad6855d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "fb27ea8b-5803-4591-a042-57faad260064"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fb27ea8b-5803-4591-a042-57faad260064",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d125ce6f-45d1-4ef1-89a7-f1eaa147dfdd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d125ce6f-45d1-4ef1-89a7-f1eaa147dfdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "6d4c7b5c-aff8-4caf-872f-740c2a7f201c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6d4c7b5c-aff8-4caf-872f-740c2a7f201c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "43a81752-a906-4e8d-95fe-896728e85358"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "43a81752-a906-4e8d-95fe-896728e85358",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.607 /home/vagrant/spdk_repo/spdk 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:18.607 00:13:18.607 real 0m12.235s 00:13:18.607 user 0m29.036s 
00:13:18.607 sys 0m18.746s 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.607 ************************************ 00:13:18.607 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:18.607 END TEST bdev_fio 00:13:18.607 ************************************ 00:13:18.607 14:50:03 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:18.607 14:50:03 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:18.607 14:50:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:18.607 14:50:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.607 14:50:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.607 ************************************ 00:13:18.607 START TEST bdev_verify 00:13:18.607 ************************************ 00:13:18.607 14:50:03 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:18.607 [2024-11-17 14:50:03.992663] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:18.607 [2024-11-17 14:50:03.992810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70074 ] 00:13:18.869 [2024-11-17 14:50:04.158689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:18.869 [2024-11-17 14:50:04.284158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.869 [2024-11-17 14:50:04.284258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.442 Running I/O for 5 seconds... 
00:13:21.404 21632.00 IOPS, 84.50 MiB/s [2024-11-17T14:50:07.892Z] 21520.00 IOPS, 84.06 MiB/s [2024-11-17T14:50:09.281Z] 21930.67 IOPS, 85.67 MiB/s [2024-11-17T14:50:09.854Z] 22944.00 IOPS, 89.62 MiB/s [2024-11-17T14:50:09.854Z] 23095.60 IOPS, 90.22 MiB/s 00:13:24.311 Latency(us) 00:13:24.311 [2024-11-17T14:50:09.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.311 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x0 length 0xa0000 00:13:24.311 nvme0n1 : 5.02 1810.56 7.07 0.00 0.00 70524.82 9477.51 74206.92 00:13:24.311 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0xa0000 length 0xa0000 00:13:24.311 nvme0n1 : 5.03 1781.50 6.96 0.00 0.00 71713.09 4486.70 68964.04 00:13:24.311 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x0 length 0xbd0bd 00:13:24.311 nvme1n1 : 5.03 2367.91 9.25 0.00 0.00 53819.05 6402.36 64527.75 00:13:24.311 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:24.311 nvme1n1 : 5.05 2259.31 8.83 0.00 0.00 56316.09 5091.64 61301.37 00:13:24.311 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x0 length 0x80000 00:13:24.311 nvme2n1 : 5.05 1850.27 7.23 0.00 0.00 68553.42 8771.74 73803.62 00:13:24.311 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x80000 length 0x80000 00:13:24.311 nvme2n1 : 5.05 1849.03 7.22 0.00 0.00 68852.81 8418.86 77433.30 00:13:24.311 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x0 length 0x80000 00:13:24.311 nvme2n2 : 5.07 1842.28 7.20 0.00 0.00 68678.71 7662.67 62511.26 00:13:24.311 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x80000 length 0x80000 00:13:24.311 nvme2n2 : 5.06 1797.81 7.02 0.00 0.00 70559.51 9477.51 65737.65 00:13:24.311 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x0 length 0x80000 00:13:24.311 nvme2n3 : 5.08 1840.75 7.19 0.00 0.00 68595.25 4889.99 64931.05 00:13:24.311 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x80000 length 0x80000 00:13:24.311 nvme2n3 : 5.07 1792.80 7.00 0.00 0.00 70580.34 9527.93 70173.93 00:13:24.311 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x0 length 0x20000 00:13:24.311 nvme3n1 : 5.08 1839.36 7.19 0.00 0.00 68565.37 4763.96 72593.72 00:13:24.311 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:24.311 Verification LBA range: start 0x20000 length 0x20000 00:13:24.311 nvme3n1 : 5.07 1816.31 7.09 0.00 0.00 69529.97 3075.15 63721.16 00:13:24.311 [2024-11-17T14:50:09.854Z] =================================================================================================================== 00:13:24.311 [2024-11-17T14:50:09.854Z] Total : 22847.89 89.25 0.00 0.00 66659.33 3075.15 77433.30 00:13:25.253 00:13:25.253 real 0m6.715s 00:13:25.253 user 0m10.934s 00:13:25.253 sys 0m1.365s 00:13:25.253 ************************************ 00:13:25.253 END TEST 
bdev_verify 00:13:25.253 ************************************ 00:13:25.253 14:50:10 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.253 14:50:10 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:25.253 14:50:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:25.253 14:50:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:25.253 14:50:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.253 14:50:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.253 ************************************ 00:13:25.253 START TEST bdev_verify_big_io 00:13:25.253 ************************************ 00:13:25.253 14:50:10 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:25.253 [2024-11-17 14:50:10.785659] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:25.253 [2024-11-17 14:50:10.785802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70173 ] 00:13:25.515 [2024-11-17 14:50:10.951567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:25.776 [2024-11-17 14:50:11.072124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.776 [2024-11-17 14:50:11.072224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.346 Running I/O for 5 seconds... 
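Sanity check on the verify summary above: the MiB/s column is just IOPS times the 4096-byte I/O size, e.g. the Total row (numbers taken from the table; the awk one-liner is only illustrative):

    # 22847.89 IOPS * 4096 bytes per I/O / 2^20 bytes per MiB ~= 89.25 MiB/s
    awk 'BEGIN { print 22847.89 * 4096 / 1048576 }'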
00:13:32.205 1304.00 IOPS, 81.50 MiB/s [2024-11-17T14:50:17.748Z] 2432.00 IOPS, 152.00 MiB/s 00:13:32.205 Latency(us) 00:13:32.205 [2024-11-17T14:50:17.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.206 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x0 length 0xa000 00:13:32.206 nvme0n1 : 5.97 72.37 4.52 0.00 0.00 1714046.47 166965.56 2168132.53 00:13:32.206 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0xa000 length 0xa000 00:13:32.206 nvme0n1 : 5.78 130.21 8.14 0.00 0.00 961529.03 6956.90 1129235.69 00:13:32.206 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x0 length 0xbd0b 00:13:32.206 nvme1n1 : 5.94 140.11 8.76 0.00 0.00 855490.98 25609.45 1032444.06 00:13:32.206 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:32.206 nvme1n1 : 5.88 117.05 7.32 0.00 0.00 1025422.10 45976.02 2026171.47 00:13:32.206 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x0 length 0x8000 00:13:32.206 nvme2n1 : 5.90 69.61 4.35 0.00 0.00 1675462.74 113730.17 3820043.03 00:13:32.206 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x8000 length 0x8000 00:13:32.206 nvme2n1 : 5.87 130.89 8.18 0.00 0.00 894433.54 137121.48 903388.55 00:13:32.206 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x0 length 0x8000 00:13:32.206 nvme2n2 : 5.94 61.94 3.87 0.00 0.00 1811787.65 146800.64 2942465.58 00:13:32.206 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x8000 length 0x8000 00:13:32.206 nvme2n2 : 5.88 163.24 10.20 0.00 0.00 705511.84 97194.93 884030.23 00:13:32.206 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x0 length 0x8000 00:13:32.206 nvme2n3 : 5.97 109.93 6.87 0.00 0.00 996427.08 35288.62 1522854.99 00:13:32.206 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x8000 length 0x8000 00:13:32.206 nvme2n3 : 5.88 152.29 9.52 0.00 0.00 735191.27 73400.32 871124.68 00:13:32.206 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x0 length 0x2000 00:13:32.206 nvme3n1 : 5.98 107.06 6.69 0.00 0.00 982824.68 3453.24 3019898.88 00:13:32.206 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:32.206 Verification LBA range: start 0x2000 length 0x2000 00:13:32.206 nvme3n1 : 5.89 160.33 10.02 0.00 0.00 683038.89 6503.19 858219.13 00:13:32.206 [2024-11-17T14:50:17.749Z] =================================================================================================================== 00:13:32.206 [2024-11-17T14:50:17.749Z] Total : 1415.04 88.44 0.00 0.00 980777.55 3453.24 3820043.03 00:13:33.151 00:13:33.151 real 0m7.899s 00:13:33.151 user 0m14.415s 00:13:33.151 sys 0m0.461s 00:13:33.151 ************************************ 00:13:33.151 END TEST bdev_verify_big_io 00:13:33.151 ************************************ 00:13:33.151 14:50:18 blockdev_xnvme.bdev_verify_big_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.151 14:50:18 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.151 14:50:18 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:33.151 14:50:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:33.151 14:50:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.151 14:50:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.151 ************************************ 00:13:33.151 START TEST bdev_write_zeroes 00:13:33.151 ************************************ 00:13:33.151 14:50:18 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:33.412 [2024-11-17 14:50:18.753086] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:33.412 [2024-11-17 14:50:18.753232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70284 ] 00:13:33.412 [2024-11-17 14:50:18.918656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.672 [2024-11-17 14:50:19.040747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.933 Running I/O for 1 seconds... 00:13:35.350 88096.00 IOPS, 344.12 MiB/s 00:13:35.350 Latency(us) 00:13:35.350 [2024-11-17T14:50:20.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.350 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.350 nvme0n1 : 1.02 14321.96 55.95 0.00 0.00 8926.94 6150.30 26012.75 00:13:35.350 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.350 nvme1n1 : 1.02 15590.87 60.90 0.00 0.00 8191.79 5444.53 19559.98 00:13:35.350 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.350 nvme2n1 : 1.02 14288.48 55.81 0.00 0.00 8931.65 6175.51 20870.70 00:13:35.350 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.350 nvme2n2 : 1.02 14272.20 55.75 0.00 0.00 8890.24 6175.51 21173.17 00:13:35.350 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.350 nvme2n3 : 1.02 14255.71 55.69 0.00 0.00 8892.80 6175.51 21273.99 00:13:35.350 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.350 nvme3n1 : 1.02 14239.57 55.62 0.00 0.00 8896.01 6099.89 21475.64 00:13:35.350 [2024-11-17T14:50:20.893Z] =================================================================================================================== 00:13:35.350 [2024-11-17T14:50:20.893Z] Total : 86968.80 339.72 0.00 0.00 8779.43 5444.53 26012.75 00:13:35.934 00:13:35.934 real 0m2.612s 00:13:35.934 user 0m1.887s 00:13:35.934 sys 0m0.515s 00:13:35.934 ************************************ 00:13:35.934 END TEST bdev_write_zeroes 00:13:35.934 ************************************ 00:13:35.934 14:50:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.934 14:50:21 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:35.934 14:50:21 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.934 14:50:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:35.934 14:50:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.934 14:50:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.934 ************************************ 00:13:35.934 START TEST bdev_json_nonenclosed 00:13:35.934 ************************************ 00:13:35.934 14:50:21 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.934 [2024-11-17 14:50:21.432757] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:35.934 [2024-11-17 14:50:21.432897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70333 ] 00:13:36.196 [2024-11-17 14:50:21.589430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.196 [2024-11-17 14:50:21.711508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.196 [2024-11-17 14:50:21.711603] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:36.196 [2024-11-17 14:50:21.711623] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.196 [2024-11-17 14:50:21.711633] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.457 00:13:36.457 real 0m0.545s 00:13:36.457 user 0m0.315s 00:13:36.457 sys 0m0.125s 00:13:36.457 ************************************ 00:13:36.457 END TEST bdev_json_nonenclosed 00:13:36.457 ************************************ 00:13:36.457 14:50:21 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.457 14:50:21 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:36.457 14:50:21 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.457 14:50:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:36.457 14:50:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.457 14:50:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.457 ************************************ 00:13:36.457 START TEST bdev_json_nonarray 00:13:36.457 ************************************ 00:13:36.457 14:50:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.718 [2024-11-17 14:50:22.037583] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
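The nonenclosed check above and the nonarray check that follows feed bdevperf deliberately malformed --json files; the valid shape they contrast against, per the error messages, is a single JSON object whose "subsystems" key is an array. An illustrative minimal file (path made up for the example) would be:

    cat > /tmp/minimal_subsystems.json <<'EOF'
    { "subsystems": [] }
    EOF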
00:13:36.718 [2024-11-17 14:50:22.037723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70359 ] 00:13:36.718 [2024-11-17 14:50:22.202480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.979 [2024-11-17 14:50:22.321624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.979 [2024-11-17 14:50:22.321728] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:36.979 [2024-11-17 14:50:22.321748] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.979 [2024-11-17 14:50:22.321759] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.979 00:13:36.979 real 0m0.542s 00:13:36.979 user 0m0.322s 00:13:36.979 sys 0m0.114s 00:13:36.979 ************************************ 00:13:36.979 END TEST bdev_json_nonarray 00:13:36.979 ************************************ 00:13:36.979 14:50:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.979 14:50:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:37.240 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:37.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:41.115 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.115 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.115 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.115 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.115 00:13:41.115 real 0m57.222s 00:13:41.115 user 1m25.877s 00:13:41.115 sys 0m33.126s 00:13:41.116 14:50:26 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.116 ************************************ 00:13:41.116 END TEST blockdev_xnvme 00:13:41.116 ************************************ 00:13:41.116 14:50:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.378 14:50:26 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:41.378 14:50:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:41.378 14:50:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.378 14:50:26 -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.378 ************************************ 00:13:41.378 START TEST ublk 00:13:41.378 ************************************ 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:41.378 * Looking for test storage... 00:13:41.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:41.378 14:50:26 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:41.378 14:50:26 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:41.378 14:50:26 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:41.378 14:50:26 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.378 14:50:26 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:41.378 14:50:26 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:41.378 14:50:26 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:41.378 14:50:26 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:41.378 14:50:26 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:41.378 14:50:26 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:41.378 14:50:26 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:41.378 14:50:26 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:41.378 14:50:26 ublk -- scripts/common.sh@345 -- # : 1 00:13:41.378 14:50:26 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:41.378 14:50:26 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:41.378 14:50:26 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:41.378 14:50:26 ublk -- scripts/common.sh@353 -- # local d=1 00:13:41.378 14:50:26 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.378 14:50:26 ublk -- scripts/common.sh@355 -- # echo 1 00:13:41.378 14:50:26 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:41.378 14:50:26 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:41.378 14:50:26 ublk -- scripts/common.sh@353 -- # local d=2 00:13:41.378 14:50:26 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.378 14:50:26 ublk -- scripts/common.sh@355 -- # echo 2 00:13:41.378 14:50:26 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:41.378 14:50:26 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:41.378 14:50:26 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:41.378 14:50:26 ublk -- scripts/common.sh@368 -- # return 0 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:41.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.378 --rc genhtml_branch_coverage=1 00:13:41.378 --rc genhtml_function_coverage=1 00:13:41.378 --rc genhtml_legend=1 00:13:41.378 --rc geninfo_all_blocks=1 00:13:41.378 --rc geninfo_unexecuted_blocks=1 00:13:41.378 00:13:41.378 ' 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:41.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.378 --rc genhtml_branch_coverage=1 00:13:41.378 --rc genhtml_function_coverage=1 00:13:41.378 --rc genhtml_legend=1 00:13:41.378 --rc geninfo_all_blocks=1 00:13:41.378 --rc geninfo_unexecuted_blocks=1 00:13:41.378 00:13:41.378 ' 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:41.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.378 --rc genhtml_branch_coverage=1 00:13:41.378 --rc genhtml_function_coverage=1 00:13:41.378 --rc genhtml_legend=1 00:13:41.378 --rc geninfo_all_blocks=1 00:13:41.378 --rc geninfo_unexecuted_blocks=1 00:13:41.378 00:13:41.378 ' 00:13:41.378 14:50:26 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:41.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.378 --rc genhtml_branch_coverage=1 00:13:41.378 --rc genhtml_function_coverage=1 00:13:41.378 --rc genhtml_legend=1 00:13:41.378 --rc geninfo_all_blocks=1 00:13:41.378 --rc geninfo_unexecuted_blocks=1 00:13:41.378 00:13:41.378 ' 00:13:41.378 14:50:26 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:41.378 14:50:26 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:41.378 14:50:26 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:41.378 14:50:26 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:41.378 14:50:26 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:41.378 14:50:26 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:41.378 14:50:26 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:41.378 14:50:26 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:41.378 14:50:26 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:41.378 14:50:26 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:41.378 14:50:26 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:41.378 14:50:26 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:41.378 14:50:26 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:41.379 14:50:26 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:41.379 14:50:26 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:41.379 14:50:26 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:41.379 14:50:26 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:41.379 14:50:26 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:41.379 14:50:26 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:41.379 14:50:26 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:41.379 14:50:26 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:41.379 14:50:26 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.379 14:50:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:41.379 ************************************ 00:13:41.379 START TEST test_save_ublk_config 00:13:41.379 ************************************ 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70644 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70644 00:13:41.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70644 ']' 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.379 14:50:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:41.640 [2024-11-17 14:50:26.976569] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
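test_save_ublk_config, starting here, brings up spdk_tgt with -L ublk, creates a ublk target plus one disk over a malloc bdev, and snapshots the result with save_config. Driven by hand, the same sequence would look roughly like this sketch (method names and the -q/-d flags mirror the rpc_cmd calls seen elsewhere in this log; the 128 MiB / 4096-byte malloc sizing follows the later create test):

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create 128 4096            # creates Malloc0
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # exposes /dev/ublkb0
    scripts/rpc.py save_config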
00:13:41.640 [2024-11-17 14:50:26.976722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70644 ] 00:13:41.640 [2024-11-17 14:50:27.138850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.901 [2024-11-17 14:50:27.258981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.476 14:50:27 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.476 14:50:27 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:42.476 14:50:27 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:42.476 14:50:27 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:42.476 14:50:27 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.476 14:50:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:42.476 [2024-11-17 14:50:27.969953] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:42.476 [2024-11-17 14:50:27.970855] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:42.737 malloc0 00:13:42.737 [2024-11-17 14:50:28.042088] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:42.737 [2024-11-17 14:50:28.042181] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:42.737 [2024-11-17 14:50:28.042192] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:42.737 [2024-11-17 14:50:28.042200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:42.737 [2024-11-17 14:50:28.051053] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:42.737 [2024-11-17 14:50:28.051084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:42.737 [2024-11-17 14:50:28.057959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:42.737 [2024-11-17 14:50:28.058085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:42.737 [2024-11-17 14:50:28.074954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:42.737 0 00:13:42.737 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.737 14:50:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:42.737 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.737 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.007 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.008 14:50:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:43.008 "subsystems": [ 00:13:43.008 { 00:13:43.008 "subsystem": "fsdev", 00:13:43.008 "config": [ 00:13:43.008 { 00:13:43.008 "method": "fsdev_set_opts", 00:13:43.008 "params": { 00:13:43.008 "fsdev_io_pool_size": 65535, 00:13:43.008 "fsdev_io_cache_size": 256 00:13:43.008 } 00:13:43.008 } 00:13:43.008 ] 00:13:43.008 }, 00:13:43.008 { 00:13:43.008 "subsystem": "keyring", 00:13:43.008 "config": [] 00:13:43.008 }, 00:13:43.008 { 00:13:43.008 "subsystem": "iobuf", 00:13:43.008 "config": [ 00:13:43.008 { 
00:13:43.008 "method": "iobuf_set_options", 00:13:43.008 "params": { 00:13:43.008 "small_pool_count": 8192, 00:13:43.008 "large_pool_count": 1024, 00:13:43.008 "small_bufsize": 8192, 00:13:43.008 "large_bufsize": 135168, 00:13:43.008 "enable_numa": false 00:13:43.008 } 00:13:43.008 } 00:13:43.008 ] 00:13:43.008 }, 00:13:43.008 { 00:13:43.008 "subsystem": "sock", 00:13:43.008 "config": [ 00:13:43.008 { 00:13:43.008 "method": "sock_set_default_impl", 00:13:43.008 "params": { 00:13:43.008 "impl_name": "posix" 00:13:43.008 } 00:13:43.008 }, 00:13:43.008 { 00:13:43.008 "method": "sock_impl_set_options", 00:13:43.008 "params": { 00:13:43.008 "impl_name": "ssl", 00:13:43.008 "recv_buf_size": 4096, 00:13:43.008 "send_buf_size": 4096, 00:13:43.008 "enable_recv_pipe": true, 00:13:43.008 "enable_quickack": false, 00:13:43.008 "enable_placement_id": 0, 00:13:43.008 "enable_zerocopy_send_server": true, 00:13:43.008 "enable_zerocopy_send_client": false, 00:13:43.008 "zerocopy_threshold": 0, 00:13:43.008 "tls_version": 0, 00:13:43.008 "enable_ktls": false 00:13:43.008 } 00:13:43.008 }, 00:13:43.008 { 00:13:43.008 "method": "sock_impl_set_options", 00:13:43.008 "params": { 00:13:43.008 "impl_name": "posix", 00:13:43.008 "recv_buf_size": 2097152, 00:13:43.008 "send_buf_size": 2097152, 00:13:43.008 "enable_recv_pipe": true, 00:13:43.009 "enable_quickack": false, 00:13:43.009 "enable_placement_id": 0, 00:13:43.009 "enable_zerocopy_send_server": true, 00:13:43.009 "enable_zerocopy_send_client": false, 00:13:43.009 "zerocopy_threshold": 0, 00:13:43.009 "tls_version": 0, 00:13:43.009 "enable_ktls": false 00:13:43.009 } 00:13:43.009 } 00:13:43.009 ] 00:13:43.009 }, 00:13:43.009 { 00:13:43.009 "subsystem": "vmd", 00:13:43.009 "config": [] 00:13:43.009 }, 00:13:43.009 { 00:13:43.009 "subsystem": "accel", 00:13:43.009 "config": [ 00:13:43.009 { 00:13:43.009 "method": "accel_set_options", 00:13:43.009 "params": { 00:13:43.009 "small_cache_size": 128, 00:13:43.009 "large_cache_size": 16, 00:13:43.009 "task_count": 2048, 00:13:43.009 "sequence_count": 2048, 00:13:43.009 "buf_count": 2048 00:13:43.009 } 00:13:43.009 } 00:13:43.009 ] 00:13:43.009 }, 00:13:43.009 { 00:13:43.009 "subsystem": "bdev", 00:13:43.009 "config": [ 00:13:43.009 { 00:13:43.009 "method": "bdev_set_options", 00:13:43.009 "params": { 00:13:43.009 "bdev_io_pool_size": 65535, 00:13:43.009 "bdev_io_cache_size": 256, 00:13:43.009 "bdev_auto_examine": true, 00:13:43.009 "iobuf_small_cache_size": 128, 00:13:43.009 "iobuf_large_cache_size": 16 00:13:43.010 } 00:13:43.010 }, 00:13:43.010 { 00:13:43.010 "method": "bdev_raid_set_options", 00:13:43.010 "params": { 00:13:43.010 "process_window_size_kb": 1024, 00:13:43.010 "process_max_bandwidth_mb_sec": 0 00:13:43.010 } 00:13:43.010 }, 00:13:43.010 { 00:13:43.010 "method": "bdev_iscsi_set_options", 00:13:43.010 "params": { 00:13:43.010 "timeout_sec": 30 00:13:43.010 } 00:13:43.010 }, 00:13:43.010 { 00:13:43.010 "method": "bdev_nvme_set_options", 00:13:43.010 "params": { 00:13:43.010 "action_on_timeout": "none", 00:13:43.010 "timeout_us": 0, 00:13:43.010 "timeout_admin_us": 0, 00:13:43.010 "keep_alive_timeout_ms": 10000, 00:13:43.010 "arbitration_burst": 0, 00:13:43.010 "low_priority_weight": 0, 00:13:43.010 "medium_priority_weight": 0, 00:13:43.010 "high_priority_weight": 0, 00:13:43.010 "nvme_adminq_poll_period_us": 10000, 00:13:43.010 "nvme_ioq_poll_period_us": 0, 00:13:43.010 "io_queue_requests": 0, 00:13:43.010 "delay_cmd_submit": true, 00:13:43.010 "transport_retry_count": 4, 00:13:43.010 
"bdev_retry_count": 3, 00:13:43.010 "transport_ack_timeout": 0, 00:13:43.010 "ctrlr_loss_timeout_sec": 0, 00:13:43.010 "reconnect_delay_sec": 0, 00:13:43.010 "fast_io_fail_timeout_sec": 0, 00:13:43.010 "disable_auto_failback": false, 00:13:43.010 "generate_uuids": false, 00:13:43.010 "transport_tos": 0, 00:13:43.010 "nvme_error_stat": false, 00:13:43.010 "rdma_srq_size": 0, 00:13:43.010 "io_path_stat": false, 00:13:43.010 "allow_accel_sequence": false, 00:13:43.010 "rdma_max_cq_size": 0, 00:13:43.010 "rdma_cm_event_timeout_ms": 0, 00:13:43.010 "dhchap_digests": [ 00:13:43.010 "sha256", 00:13:43.011 "sha384", 00:13:43.011 "sha512" 00:13:43.011 ], 00:13:43.011 "dhchap_dhgroups": [ 00:13:43.011 "null", 00:13:43.011 "ffdhe2048", 00:13:43.011 "ffdhe3072", 00:13:43.011 "ffdhe4096", 00:13:43.011 "ffdhe6144", 00:13:43.011 "ffdhe8192" 00:13:43.011 ] 00:13:43.011 } 00:13:43.011 }, 00:13:43.011 { 00:13:43.011 "method": "bdev_nvme_set_hotplug", 00:13:43.011 "params": { 00:13:43.011 "period_us": 100000, 00:13:43.011 "enable": false 00:13:43.011 } 00:13:43.011 }, 00:13:43.011 { 00:13:43.011 "method": "bdev_malloc_create", 00:13:43.011 "params": { 00:13:43.011 "name": "malloc0", 00:13:43.011 "num_blocks": 8192, 00:13:43.011 "block_size": 4096, 00:13:43.011 "physical_block_size": 4096, 00:13:43.011 "uuid": "57cac204-c94c-4919-9232-424fc7f98c2f", 00:13:43.011 "optimal_io_boundary": 0, 00:13:43.011 "md_size": 0, 00:13:43.011 "dif_type": 0, 00:13:43.011 "dif_is_head_of_md": false, 00:13:43.011 "dif_pi_format": 0 00:13:43.011 } 00:13:43.011 }, 00:13:43.011 { 00:13:43.011 "method": "bdev_wait_for_examine" 00:13:43.011 } 00:13:43.011 ] 00:13:43.011 }, 00:13:43.011 { 00:13:43.011 "subsystem": "scsi", 00:13:43.011 "config": null 00:13:43.011 }, 00:13:43.012 { 00:13:43.012 "subsystem": "scheduler", 00:13:43.012 "config": [ 00:13:43.012 { 00:13:43.012 "method": "framework_set_scheduler", 00:13:43.012 "params": { 00:13:43.012 "name": "static" 00:13:43.012 } 00:13:43.012 } 00:13:43.012 ] 00:13:43.012 }, 00:13:43.012 { 00:13:43.012 "subsystem": "vhost_scsi", 00:13:43.012 "config": [] 00:13:43.012 }, 00:13:43.012 { 00:13:43.012 "subsystem": "vhost_blk", 00:13:43.012 "config": [] 00:13:43.012 }, 00:13:43.012 { 00:13:43.012 "subsystem": "ublk", 00:13:43.012 "config": [ 00:13:43.012 { 00:13:43.012 "method": "ublk_create_target", 00:13:43.012 "params": { 00:13:43.012 "cpumask": "1" 00:13:43.012 } 00:13:43.012 }, 00:13:43.012 { 00:13:43.012 "method": "ublk_start_disk", 00:13:43.012 "params": { 00:13:43.012 "bdev_name": "malloc0", 00:13:43.012 "ublk_id": 0, 00:13:43.012 "num_queues": 1, 00:13:43.012 "queue_depth": 128 00:13:43.012 } 00:13:43.012 } 00:13:43.012 ] 00:13:43.012 }, 00:13:43.012 { 00:13:43.012 "subsystem": "nbd", 00:13:43.012 "config": [] 00:13:43.012 }, 00:13:43.012 { 00:13:43.012 "subsystem": "nvmf", 00:13:43.012 "config": [ 00:13:43.012 { 00:13:43.012 "method": "nvmf_set_config", 00:13:43.012 "params": { 00:13:43.012 "discovery_filter": "match_any", 00:13:43.012 "admin_cmd_passthru": { 00:13:43.012 "identify_ctrlr": false 00:13:43.012 }, 00:13:43.012 "dhchap_digests": [ 00:13:43.012 "sha256", 00:13:43.012 "sha384", 00:13:43.012 "sha512" 00:13:43.012 ], 00:13:43.012 "dhchap_dhgroups": [ 00:13:43.012 "null", 00:13:43.012 "ffdhe2048", 00:13:43.012 "ffdhe3072", 00:13:43.012 "ffdhe4096", 00:13:43.012 "ffdhe6144", 00:13:43.013 "ffdhe8192" 00:13:43.013 ] 00:13:43.013 } 00:13:43.013 }, 00:13:43.013 { 00:13:43.013 "method": "nvmf_set_max_subsystems", 00:13:43.013 "params": { 00:13:43.013 "max_subsystems": 1024 
00:13:43.013 } 00:13:43.013 }, 00:13:43.013 { 00:13:43.013 "method": "nvmf_set_crdt", 00:13:43.013 "params": { 00:13:43.013 "crdt1": 0, 00:13:43.013 "crdt2": 0, 00:13:43.013 "crdt3": 0 00:13:43.013 } 00:13:43.013 } 00:13:43.013 ] 00:13:43.013 }, 00:13:43.013 { 00:13:43.013 "subsystem": "iscsi", 00:13:43.013 "config": [ 00:13:43.013 { 00:13:43.013 "method": "iscsi_set_options", 00:13:43.013 "params": { 00:13:43.013 "node_base": "iqn.2016-06.io.spdk", 00:13:43.013 "max_sessions": 128, 00:13:43.013 "max_connections_per_session": 2, 00:13:43.013 "max_queue_depth": 64, 00:13:43.013 "default_time2wait": 2, 00:13:43.013 "default_time2retain": 20, 00:13:43.013 "first_burst_length": 8192, 00:13:43.013 "immediate_data": true, 00:13:43.013 "allow_duplicated_isid": false, 00:13:43.013 "error_recovery_level": 0, 00:13:43.013 "nop_timeout": 60, 00:13:43.013 "nop_in_interval": 30, 00:13:43.013 "disable_chap": false, 00:13:43.014 "require_chap": false, 00:13:43.014 "mutual_chap": false, 00:13:43.014 "chap_group": 0, 00:13:43.014 "max_large_datain_per_connection": 64, 00:13:43.014 "max_r2t_per_connection": 4, 00:13:43.014 "pdu_pool_size": 36864, 00:13:43.014 "immediate_data_pool_size": 16384, 00:13:43.014 "data_out_pool_size": 2048 00:13:43.014 } 00:13:43.014 } 00:13:43.014 ] 00:13:43.014 } 00:13:43.014 ] 00:13:43.014 }' 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70644 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70644 ']' 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70644 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70644 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.014 killing process with pid 70644 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70644' 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70644 00:13:43.014 14:50:28 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70644 00:13:43.960 [2024-11-17 14:50:29.488380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:44.223 [2024-11-17 14:50:29.531967] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:44.223 [2024-11-17 14:50:29.532140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:44.223 [2024-11-17 14:50:29.545942] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:44.223 [2024-11-17 14:50:29.546004] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:44.223 [2024-11-17 14:50:29.546019] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:44.223 [2024-11-17 14:50:29.546059] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:44.223 [2024-11-17 14:50:29.546217] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:46.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
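The second half of test_save_ublk_config, which begins below, replays the JSON captured above into a fresh target through /dev/fd/63; the equivalent with an ordinary file (the path is illustrative) would be:

    scripts/rpc.py save_config > /tmp/ublk_config.json
    build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json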
00:13:46.139 14:50:31 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70707 00:13:46.139 14:50:31 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70707 00:13:46.139 14:50:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70707 ']' 00:13:46.139 14:50:31 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.139 14:50:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.139 14:50:31 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.139 14:50:31 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.139 14:50:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:46.139 14:50:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:46.139 14:50:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:46.139 "subsystems": [ 00:13:46.139 { 00:13:46.139 "subsystem": "fsdev", 00:13:46.139 "config": [ 00:13:46.139 { 00:13:46.139 "method": "fsdev_set_opts", 00:13:46.139 "params": { 00:13:46.139 "fsdev_io_pool_size": 65535, 00:13:46.139 "fsdev_io_cache_size": 256 00:13:46.139 } 00:13:46.139 } 00:13:46.140 ] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "keyring", 00:13:46.140 "config": [] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "iobuf", 00:13:46.140 "config": [ 00:13:46.140 { 00:13:46.140 "method": "iobuf_set_options", 00:13:46.140 "params": { 00:13:46.140 "small_pool_count": 8192, 00:13:46.140 "large_pool_count": 1024, 00:13:46.140 "small_bufsize": 8192, 00:13:46.140 "large_bufsize": 135168, 00:13:46.140 "enable_numa": false 00:13:46.140 } 00:13:46.140 } 00:13:46.140 ] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "sock", 00:13:46.140 "config": [ 00:13:46.140 { 00:13:46.140 "method": "sock_set_default_impl", 00:13:46.140 "params": { 00:13:46.140 "impl_name": "posix" 00:13:46.140 } 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "method": "sock_impl_set_options", 00:13:46.140 "params": { 00:13:46.140 "impl_name": "ssl", 00:13:46.140 "recv_buf_size": 4096, 00:13:46.140 "send_buf_size": 4096, 00:13:46.140 "enable_recv_pipe": true, 00:13:46.140 "enable_quickack": false, 00:13:46.140 "enable_placement_id": 0, 00:13:46.140 "enable_zerocopy_send_server": true, 00:13:46.140 "enable_zerocopy_send_client": false, 00:13:46.140 "zerocopy_threshold": 0, 00:13:46.140 "tls_version": 0, 00:13:46.140 "enable_ktls": false 00:13:46.140 } 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "method": "sock_impl_set_options", 00:13:46.140 "params": { 00:13:46.140 "impl_name": "posix", 00:13:46.140 "recv_buf_size": 2097152, 00:13:46.140 "send_buf_size": 2097152, 00:13:46.140 "enable_recv_pipe": true, 00:13:46.140 "enable_quickack": false, 00:13:46.140 "enable_placement_id": 0, 00:13:46.140 "enable_zerocopy_send_server": true, 00:13:46.140 "enable_zerocopy_send_client": false, 00:13:46.140 "zerocopy_threshold": 0, 00:13:46.140 "tls_version": 0, 00:13:46.140 "enable_ktls": false 00:13:46.140 } 00:13:46.140 } 00:13:46.140 ] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "vmd", 00:13:46.140 "config": [] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "accel", 00:13:46.140 "config": [ 00:13:46.140 { 00:13:46.140 "method": "accel_set_options", 00:13:46.140 "params": { 
00:13:46.140 "small_cache_size": 128, 00:13:46.140 "large_cache_size": 16, 00:13:46.140 "task_count": 2048, 00:13:46.140 "sequence_count": 2048, 00:13:46.140 "buf_count": 2048 00:13:46.140 } 00:13:46.140 } 00:13:46.140 ] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "bdev", 00:13:46.140 "config": [ 00:13:46.140 { 00:13:46.140 "method": "bdev_set_options", 00:13:46.140 "params": { 00:13:46.140 "bdev_io_pool_size": 65535, 00:13:46.140 "bdev_io_cache_size": 256, 00:13:46.140 "bdev_auto_examine": true, 00:13:46.140 "iobuf_small_cache_size": 128, 00:13:46.140 "iobuf_large_cache_size": 16 00:13:46.140 } 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "method": "bdev_raid_set_options", 00:13:46.140 "params": { 00:13:46.140 "process_window_size_kb": 1024, 00:13:46.140 "process_max_bandwidth_mb_sec": 0 00:13:46.140 } 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "method": "bdev_iscsi_set_options", 00:13:46.140 "params": { 00:13:46.140 "timeout_sec": 30 00:13:46.140 } 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "method": "bdev_nvme_set_options", 00:13:46.140 "params": { 00:13:46.140 "action_on_timeout": "none", 00:13:46.140 "timeout_us": 0, 00:13:46.140 "timeout_admin_us": 0, 00:13:46.140 "keep_alive_timeout_ms": 10000, 00:13:46.140 "arbitration_burst": 0, 00:13:46.140 "low_priority_weight": 0, 00:13:46.140 "medium_priority_weight": 0, 00:13:46.140 "high_priority_weight": 0, 00:13:46.140 "nvme_adminq_poll_period_us": 10000, 00:13:46.140 "nvme_ioq_poll_period_us": 0, 00:13:46.140 "io_queue_requests": 0, 00:13:46.140 "delay_cmd_submit": true, 00:13:46.140 "transport_retry_count": 4, 00:13:46.140 "bdev_retry_count": 3, 00:13:46.140 "transport_ack_timeout": 0, 00:13:46.140 "ctrlr_loss_timeout_sec": 0, 00:13:46.140 "reconnect_delay_sec": 0, 00:13:46.140 "fast_io_fail_timeout_sec": 0, 00:13:46.140 "disable_auto_failback": false, 00:13:46.140 "generate_uuids": false, 00:13:46.140 "transport_tos": 0, 00:13:46.140 "nvme_error_stat": false, 00:13:46.140 "rdma_srq_size": 0, 00:13:46.140 "io_path_stat": false, 00:13:46.140 "allow_accel_sequence": false, 00:13:46.140 "rdma_max_cq_size": 0, 00:13:46.140 "rdma_cm_event_timeout_ms": 0, 00:13:46.140 "dhchap_digests": [ 00:13:46.140 "sha256", 00:13:46.140 "sha384", 00:13:46.140 "sha512" 00:13:46.140 ], 00:13:46.140 "dhchap_dhgroups": [ 00:13:46.140 "null", 00:13:46.140 "ffdhe2048", 00:13:46.140 "ffdhe3072", 00:13:46.140 "ffdhe4096", 00:13:46.140 "ffdhe6144", 00:13:46.140 "ffdhe8192" 00:13:46.140 ] 00:13:46.140 } 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "method": "bdev_nvme_set_hotplug", 00:13:46.140 "params": { 00:13:46.140 "period_us": 100000, 00:13:46.140 "enable": false 00:13:46.140 } 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "method": "bdev_malloc_create", 00:13:46.140 "params": { 00:13:46.140 "name": "malloc0", 00:13:46.140 "num_blocks": 8192, 00:13:46.140 "block_size": 4096, 00:13:46.140 "physical_block_size": 4096, 00:13:46.140 "uuid": "57cac204-c94c-4919-9232-424fc7f98c2f", 00:13:46.140 "optimal_io_boundary": 0, 00:13:46.140 "md_size": 0, 00:13:46.140 "dif_type": 0, 00:13:46.140 "dif_is_head_of_md": false, 00:13:46.140 "dif_pi_format": 0 00:13:46.140 } 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "method": "bdev_wait_for_examine" 00:13:46.140 } 00:13:46.140 ] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "scsi", 00:13:46.140 "config": null 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "scheduler", 00:13:46.140 "config": [ 00:13:46.140 { 00:13:46.140 "method": "framework_set_scheduler", 00:13:46.140 "params": { 00:13:46.140 
"name": "static" 00:13:46.140 } 00:13:46.140 } 00:13:46.140 ] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "vhost_scsi", 00:13:46.140 "config": [] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "vhost_blk", 00:13:46.140 "config": [] 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "subsystem": "ublk", 00:13:46.140 "config": [ 00:13:46.140 { 00:13:46.140 "method": "ublk_create_target", 00:13:46.140 "params": { 00:13:46.140 "cpumask": "1" 00:13:46.140 } 00:13:46.140 }, 00:13:46.140 { 00:13:46.140 "method": "ublk_start_disk", 00:13:46.141 "params": { 00:13:46.141 "bdev_name": "malloc0", 00:13:46.141 "ublk_id": 0, 00:13:46.141 "num_queues": 1, 00:13:46.141 "queue_depth": 128 00:13:46.141 } 00:13:46.141 } 00:13:46.141 ] 00:13:46.141 }, 00:13:46.141 { 00:13:46.141 "subsystem": "nbd", 00:13:46.141 "config": [] 00:13:46.141 }, 00:13:46.141 { 00:13:46.141 "subsystem": "nvmf", 00:13:46.141 "config": [ 00:13:46.141 { 00:13:46.141 "method": "nvmf_set_config", 00:13:46.141 "params": { 00:13:46.141 "discovery_filter": "match_any", 00:13:46.141 "admin_cmd_passthru": { 00:13:46.141 "identify_ctrlr": false 00:13:46.141 }, 00:13:46.141 "dhchap_digests": [ 00:13:46.141 "sha256", 00:13:46.141 "sha384", 00:13:46.141 "sha512" 00:13:46.141 ], 00:13:46.141 "dhchap_dhgroups": [ 00:13:46.141 "null", 00:13:46.141 "ffdhe2048", 00:13:46.141 "ffdhe3072", 00:13:46.141 "ffdhe4096", 00:13:46.141 "ffdhe6144", 00:13:46.141 "ffdhe8192" 00:13:46.141 ] 00:13:46.141 } 00:13:46.141 }, 00:13:46.141 { 00:13:46.141 "method": "nvmf_set_max_subsystems", 00:13:46.141 "params": { 00:13:46.141 "max_subsystems": 1024 00:13:46.141 } 00:13:46.141 }, 00:13:46.141 { 00:13:46.141 "method": "nvmf_set_crdt", 00:13:46.141 "params": { 00:13:46.141 "crdt1": 0, 00:13:46.141 "crdt2": 0, 00:13:46.141 "crdt3": 0 00:13:46.141 } 00:13:46.141 } 00:13:46.141 ] 00:13:46.141 }, 00:13:46.141 { 00:13:46.141 "subsystem": "iscsi", 00:13:46.141 "config": [ 00:13:46.141 { 00:13:46.141 "method": "iscsi_set_options", 00:13:46.141 "params": { 00:13:46.141 "node_base": "iqn.2016-06.io.spdk", 00:13:46.141 "max_sessions": 128, 00:13:46.141 "max_connections_per_session": 2, 00:13:46.141 "max_queue_depth": 64, 00:13:46.141 "default_time2wait": 2, 00:13:46.141 "default_time2retain": 20, 00:13:46.141 "first_burst_length": 8192, 00:13:46.141 "immediate_data": true, 00:13:46.141 "allow_duplicated_isid": false, 00:13:46.141 "error_recovery_level": 0, 00:13:46.141 "nop_timeout": 60, 00:13:46.141 "nop_in_interval": 30, 00:13:46.141 "disable_chap": false, 00:13:46.141 "require_chap": false, 00:13:46.141 "mutual_chap": false, 00:13:46.141 "chap_group": 0, 00:13:46.141 "max_large_datain_per_connection": 64, 00:13:46.141 "max_r2t_per_connection": 4, 00:13:46.141 "pdu_pool_size": 36864, 00:13:46.141 "immediate_data_pool_size": 16384, 00:13:46.141 "data_out_pool_size": 2048 00:13:46.141 } 00:13:46.141 } 00:13:46.141 ] 00:13:46.141 } 00:13:46.141 ] 00:13:46.141 }' 00:13:46.141 [2024-11-17 14:50:31.336135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:13:46.141 [2024-11-17 14:50:31.336257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70707 ] 00:13:46.141 [2024-11-17 14:50:31.490946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.141 [2024-11-17 14:50:31.567095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.713 [2024-11-17 14:50:32.201936] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:46.713 [2024-11-17 14:50:32.202572] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:46.713 [2024-11-17 14:50:32.210025] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:46.713 [2024-11-17 14:50:32.210083] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:46.713 [2024-11-17 14:50:32.210090] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:46.713 [2024-11-17 14:50:32.210096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:46.713 [2024-11-17 14:50:32.218990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:46.713 [2024-11-17 14:50:32.219007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:46.713 [2024-11-17 14:50:32.225941] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:46.713 [2024-11-17 14:50:32.226009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:46.713 [2024-11-17 14:50:32.241976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70707 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70707 ']' 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70707 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70707 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.974 killing process with pid 70707 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70707' 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70707 00:13:46.974 14:50:32 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70707 00:13:47.916 [2024-11-17 14:50:33.328032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:47.916 [2024-11-17 14:50:33.367001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:47.916 [2024-11-17 14:50:33.367096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:47.916 [2024-11-17 14:50:33.373936] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:47.916 [2024-11-17 14:50:33.373980] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:47.916 [2024-11-17 14:50:33.373986] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:47.916 [2024-11-17 14:50:33.374006] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:47.916 [2024-11-17 14:50:33.374112] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:49.300 14:50:34 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:49.300 00:13:49.300 real 0m7.788s 00:13:49.300 user 0m5.081s 00:13:49.300 sys 0m3.342s 00:13:49.300 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.300 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:49.300 ************************************ 00:13:49.300 END TEST test_save_ublk_config 00:13:49.300 ************************************ 00:13:49.300 14:50:34 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70780 00:13:49.300 14:50:34 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:49.300 14:50:34 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70780 00:13:49.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.300 14:50:34 ublk -- common/autotest_common.sh@835 -- # '[' -z 70780 ']' 00:13:49.300 14:50:34 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.300 14:50:34 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:49.300 14:50:34 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.300 14:50:34 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.300 14:50:34 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.300 14:50:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.300 [2024-11-17 14:50:34.795716] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:13:49.300 [2024-11-17 14:50:34.795836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70780 ] 00:13:49.561 [2024-11-17 14:50:34.950233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:49.561 [2024-11-17 14:50:35.027782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.561 [2024-11-17 14:50:35.027846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.132 14:50:35 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.132 14:50:35 ublk -- common/autotest_common.sh@868 -- # return 0 00:13:50.132 14:50:35 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:50.132 14:50:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.132 14:50:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.132 14:50:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.132 ************************************ 00:13:50.132 START TEST test_create_ublk 00:13:50.132 ************************************ 00:13:50.132 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:13:50.132 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:50.132 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.132 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.132 [2024-11-17 14:50:35.639936] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:50.132 [2024-11-17 14:50:35.641400] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:50.132 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.132 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:50.132 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:50.132 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.132 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.393 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:50.393 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.393 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.393 [2024-11-17 14:50:35.792043] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:50.393 [2024-11-17 14:50:35.792337] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:50.393 [2024-11-17 14:50:35.792350] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:50.393 [2024-11-17 14:50:35.792356] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:50.393 [2024-11-17 14:50:35.801099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:50.393 [2024-11-17 14:50:35.801117] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:50.393 
[2024-11-17 14:50:35.807944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:50.393 [2024-11-17 14:50:35.816979] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:50.393 [2024-11-17 14:50:35.832944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:50.393 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:50.393 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.393 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.393 14:50:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:50.393 { 00:13:50.393 "ublk_device": "/dev/ublkb0", 00:13:50.393 "id": 0, 00:13:50.393 "queue_depth": 512, 00:13:50.393 "num_queues": 4, 00:13:50.393 "bdev_name": "Malloc0" 00:13:50.393 } 00:13:50.393 ]' 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:50.393 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:50.655 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:50.655 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:50.655 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:50.655 14:50:35 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:50.655 14:50:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:50.655 14:50:36 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:13:50.655 14:50:36 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:50.655 fio: verification read phase will never start because write phase uses all of runtime 00:13:50.655 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:50.655 fio-3.35 00:13:50.655 Starting 1 process 00:14:02.883 00:14:02.883 fio_test: (groupid=0, jobs=1): err= 0: pid=70820: Sun Nov 17 14:50:46 2024 00:14:02.883 write: IOPS=15.1k, BW=58.9MiB/s (61.8MB/s)(589MiB/10001msec); 0 zone resets 00:14:02.883 clat (usec): min=35, max=8010, avg=65.56, stdev=117.36 00:14:02.883 lat (usec): min=36, max=8010, avg=65.98, stdev=117.37 00:14:02.883 clat percentiles (usec): 00:14:02.883 | 1.00th=[ 43], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 56], 00:14:02.883 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 63], 00:14:02.883 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 74], 00:14:02.883 | 99.00th=[ 85], 99.50th=[ 95], 99.90th=[ 2704], 99.95th=[ 3359], 00:14:02.883 | 99.99th=[ 3687] 00:14:02.883 bw ( KiB/s): min=33776, max=74680, per=100.00%, avg=60415.58, stdev=7913.48, samples=19 00:14:02.883 iops : min= 8444, max=18670, avg=15103.89, stdev=1978.37, samples=19 00:14:02.883 lat (usec) : 50=11.03%, 100=88.56%, 250=0.20%, 500=0.02%, 750=0.01% 00:14:02.883 lat (usec) : 1000=0.01% 00:14:02.883 lat (msec) : 2=0.04%, 4=0.13%, 10=0.01% 00:14:02.883 cpu : usr=2.49%, sys=11.72%, ctx=150841, majf=0, minf=797 00:14:02.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.883 issued rwts: total=0,150841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:02.883 00:14:02.883 Run status group 0 (all jobs): 00:14:02.884 WRITE: bw=58.9MiB/s (61.8MB/s), 58.9MiB/s-58.9MiB/s (61.8MB/s-61.8MB/s), io=589MiB (618MB), run=10001-10001msec 00:14:02.884 00:14:02.884 Disk stats (read/write): 00:14:02.884 ublkb0: ios=0/149323, merge=0/0, ticks=0/8566, in_queue=8567, util=99.09% 00:14:02.884 14:50:46 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 [2024-11-17 14:50:46.253156] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:02.884 [2024-11-17 14:50:46.298982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:02.884 [2024-11-17 14:50:46.299668] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:02.884 [2024-11-17 14:50:46.306961] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:02.884 [2024-11-17 14:50:46.307210] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:02.884 [2024-11-17 14:50:46.307224] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:46 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 [2024-11-17 14:50:46.323005] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:02.884 request: 00:14:02.884 { 00:14:02.884 "ublk_id": 0, 00:14:02.884 "method": "ublk_stop_disk", 00:14:02.884 "req_id": 1 00:14:02.884 } 00:14:02.884 Got JSON-RPC error response 00:14:02.884 response: 00:14:02.884 { 00:14:02.884 "code": -19, 00:14:02.884 "message": "No such device" 00:14:02.884 } 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.884 14:50:46 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 [2024-11-17 14:50:46.338998] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:02.884 [2024-11-17 14:50:46.346933] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:02.884 [2024-11-17 14:50:46.346965] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:46 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:46 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:02.884 14:50:46 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:46 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:02.884 14:50:46 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:02.884 14:50:46 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:02.884 14:50:46 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:46 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:02.884 14:50:46 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:02.884 14:50:46 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:02.884 00:14:02.884 real 0m11.157s 00:14:02.884 user 0m0.550s 00:14:02.884 sys 0m1.245s 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.884 14:50:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 ************************************ 00:14:02.884 END TEST test_create_ublk 00:14:02.884 ************************************ 00:14:02.884 14:50:46 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:02.884 14:50:46 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:02.884 14:50:46 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.884 14:50:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 ************************************ 00:14:02.884 START TEST test_create_multi_ublk 00:14:02.884 ************************************ 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 [2024-11-17 14:50:46.838934] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:02.884 [2024-11-17 14:50:46.840404] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 [2024-11-17 14:50:47.055040] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:02.884 [2024-11-17 14:50:47.055340] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:02.884 [2024-11-17 14:50:47.055351] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:02.884 [2024-11-17 14:50:47.055360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.884 [2024-11-17 14:50:47.074952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.884 [2024-11-17 14:50:47.074973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.884 [2024-11-17 14:50:47.086942] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.884 [2024-11-17 14:50:47.087429] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:02.884 [2024-11-17 14:50:47.110942] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.884 [2024-11-17 14:50:47.335048] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:02.884 [2024-11-17 14:50:47.335340] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:02.884 [2024-11-17 14:50:47.335352] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:02.884 [2024-11-17 14:50:47.335357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.884 [2024-11-17 14:50:47.342953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.884 [2024-11-17 14:50:47.342972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.884 [2024-11-17 14:50:47.350948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.884 [2024-11-17 14:50:47.351436] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:02.884 [2024-11-17 14:50:47.359963] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.884 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.885 
14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 [2024-11-17 14:50:47.519026] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:02.885 [2024-11-17 14:50:47.519322] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:02.885 [2024-11-17 14:50:47.519333] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:02.885 [2024-11-17 14:50:47.519339] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.885 [2024-11-17 14:50:47.526952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.885 [2024-11-17 14:50:47.526972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.885 [2024-11-17 14:50:47.534951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.885 [2024-11-17 14:50:47.535441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:02.885 [2024-11-17 14:50:47.543970] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 [2024-11-17 14:50:47.703051] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:02.885 [2024-11-17 14:50:47.703346] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:02.885 [2024-11-17 14:50:47.703359] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:02.885 [2024-11-17 14:50:47.703364] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.885 
[2024-11-17 14:50:47.712116] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.885 [2024-11-17 14:50:47.712133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.885 [2024-11-17 14:50:47.718959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.885 [2024-11-17 14:50:47.719438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:02.885 [2024-11-17 14:50:47.731958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:02.885 { 00:14:02.885 "ublk_device": "/dev/ublkb0", 00:14:02.885 "id": 0, 00:14:02.885 "queue_depth": 512, 00:14:02.885 "num_queues": 4, 00:14:02.885 "bdev_name": "Malloc0" 00:14:02.885 }, 00:14:02.885 { 00:14:02.885 "ublk_device": "/dev/ublkb1", 00:14:02.885 "id": 1, 00:14:02.885 "queue_depth": 512, 00:14:02.885 "num_queues": 4, 00:14:02.885 "bdev_name": "Malloc1" 00:14:02.885 }, 00:14:02.885 { 00:14:02.885 "ublk_device": "/dev/ublkb2", 00:14:02.885 "id": 2, 00:14:02.885 "queue_depth": 512, 00:14:02.885 "num_queues": 4, 00:14:02.885 "bdev_name": "Malloc2" 00:14:02.885 }, 00:14:02.885 { 00:14:02.885 "ublk_device": "/dev/ublkb3", 00:14:02.885 "id": 3, 00:14:02.885 "queue_depth": 512, 00:14:02.885 "num_queues": 4, 00:14:02.885 "bdev_name": "Malloc3" 00:14:02.885 } 00:14:02.885 ]' 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:02.885 14:50:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.885 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.885 [2024-11-17 14:50:48.403004] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:03.144 [2024-11-17 14:50:48.435488] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:03.144 [2024-11-17 14:50:48.436622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:03.144 [2024-11-17 14:50:48.446946] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:03.144 [2024-11-17 14:50:48.447190] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:03.144 [2024-11-17 14:50:48.447203] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.144 [2024-11-17 14:50:48.462999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:03.144 [2024-11-17 14:50:48.501940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:03.144 [2024-11-17 14:50:48.502789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:03.144 [2024-11-17 14:50:48.506940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:03.144 [2024-11-17 14:50:48.507198] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:03.144 [2024-11-17 14:50:48.507212] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.144 [2024-11-17 14:50:48.515009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:03.144 [2024-11-17 14:50:48.545456] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:03.144 [2024-11-17 14:50:48.546520] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:03.144 [2024-11-17 14:50:48.550957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:03.144 [2024-11-17 14:50:48.551184] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:03.144 [2024-11-17 14:50:48.551196] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.144 [2024-11-17 14:50:48.563996] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:03.144 [2024-11-17 14:50:48.604438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:03.144 [2024-11-17 14:50:48.605494] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:03.144 [2024-11-17 14:50:48.609991] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:03.144 [2024-11-17 14:50:48.610227] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:03.144 [2024-11-17 14:50:48.610239] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.144 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:03.403 [2024-11-17 14:50:48.801986] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:03.403 [2024-11-17 14:50:48.809936] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:03.403 [2024-11-17 14:50:48.809965] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:03.403 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:03.403 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.403 14:50:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:03.403 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.403 14:50:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.661 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.661 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.661 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:03.661 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.661 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.228 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:04.487 14:50:49 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:04.487 14:50:49 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:04.487 14:50:50 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:04.487 00:14:04.487 real 0m3.185s 00:14:04.487 user 0m0.836s 00:14:04.487 sys 0m0.124s 00:14:04.487 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.487 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.487 ************************************ 00:14:04.487 END TEST test_create_multi_ublk 00:14:04.487 ************************************ 00:14:04.745 14:50:50 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:04.745 14:50:50 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:04.745 14:50:50 ublk -- ublk/ublk.sh@130 -- # killprocess 70780 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@954 -- # '[' -z 70780 ']' 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@958 -- # kill -0 70780 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@959 -- # uname 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70780 00:14:04.745 killing process with pid 70780 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70780' 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@973 -- # kill 70780 00:14:04.745 14:50:50 ublk -- common/autotest_common.sh@978 -- # wait 70780 00:14:05.311 [2024-11-17 14:50:50.594221] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:05.311 [2024-11-17 14:50:50.594273] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:05.879 00:14:05.879 real 0m24.542s 00:14:05.879 user 0m35.157s 00:14:05.879 sys 0m8.999s 00:14:05.879 14:50:51 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.879 14:50:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.879 ************************************ 00:14:05.879 END TEST ublk 00:14:05.879 ************************************ 00:14:05.879 14:50:51 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:05.879 
14:50:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.879 14:50:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.879 14:50:51 -- common/autotest_common.sh@10 -- # set +x 00:14:05.879 ************************************ 00:14:05.879 START TEST ublk_recovery 00:14:05.879 ************************************ 00:14:05.879 14:50:51 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:05.879 * Looking for test storage... 00:14:05.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:05.879 14:50:51 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:05.879 14:50:51 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:05.879 14:50:51 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.139 14:50:51 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.139 14:50:51 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.139 14:50:51 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.139 14:50:51 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.139 14:50:51 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.140 14:50:51 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.140 --rc genhtml_branch_coverage=1 00:14:06.140 --rc genhtml_function_coverage=1 00:14:06.140 --rc genhtml_legend=1 00:14:06.140 --rc geninfo_all_blocks=1 00:14:06.140 --rc geninfo_unexecuted_blocks=1 00:14:06.140 00:14:06.140 ' 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.140 --rc genhtml_branch_coverage=1 00:14:06.140 --rc genhtml_function_coverage=1 00:14:06.140 --rc genhtml_legend=1 00:14:06.140 --rc geninfo_all_blocks=1 00:14:06.140 --rc geninfo_unexecuted_blocks=1 00:14:06.140 00:14:06.140 ' 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.140 --rc genhtml_branch_coverage=1 00:14:06.140 --rc genhtml_function_coverage=1 00:14:06.140 --rc genhtml_legend=1 00:14:06.140 --rc geninfo_all_blocks=1 00:14:06.140 --rc geninfo_unexecuted_blocks=1 00:14:06.140 00:14:06.140 ' 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.140 --rc genhtml_branch_coverage=1 00:14:06.140 --rc genhtml_function_coverage=1 00:14:06.140 --rc genhtml_legend=1 00:14:06.140 --rc geninfo_all_blocks=1 00:14:06.140 --rc geninfo_unexecuted_blocks=1 00:14:06.140 00:14:06.140 ' 00:14:06.140 14:50:51 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:06.140 14:50:51 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:06.140 14:50:51 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:06.140 14:50:51 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:06.140 14:50:51 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:06.140 14:50:51 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:06.140 14:50:51 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:06.140 14:50:51 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:06.140 14:50:51 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:06.140 14:50:51 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:06.140 14:50:51 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71164 00:14:06.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.140 14:50:51 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:06.140 14:50:51 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:06.140 14:50:51 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71164 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71164 ']' 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.140 14:50:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.140 [2024-11-17 14:50:51.526022] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:06.140 [2024-11-17 14:50:51.526261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71164 ] 00:14:06.140 [2024-11-17 14:50:51.676150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:06.398 [2024-11-17 14:50:51.752399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.398 [2024-11-17 14:50:51.752473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:06.965 14:50:52 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.965 [2024-11-17 14:50:52.317943] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:06.965 [2024-11-17 14:50:52.319494] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.965 14:50:52 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.965 malloc0 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.965 14:50:52 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.965 [2024-11-17 14:50:52.398123] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:06.965 [2024-11-17 14:50:52.398219] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:06.965 [2024-11-17 14:50:52.398234] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:06.965 [2024-11-17 14:50:52.398244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:06.965 [2024-11-17 14:50:52.406067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:06.965 [2024-11-17 14:50:52.406087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:06.965 [2024-11-17 14:50:52.413958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:06.965 [2024-11-17 14:50:52.414101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:06.965 [2024-11-17 14:50:52.424952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.965 1 00:14:06.965 14:50:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.965 14:50:52 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:07.899 14:50:53 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71199 00:14:07.899 14:50:53 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:07.899 14:50:53 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:08.157 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.157 fio-3.35 00:14:08.157 Starting 1 process 00:14:13.423 14:50:58 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71164 00:14:13.423 14:50:58 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:18.770 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71164 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:18.770 14:51:03 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71310 00:14:18.770 14:51:03 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.770 14:51:03 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71310 00:14:18.770 14:51:03 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:18.770 14:51:03 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71310 ']' 00:14:18.770 14:51:03 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.770 14:51:03 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.770 14:51:03 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.770 14:51:03 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.770 14:51:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.770 [2024-11-17 14:51:03.529250] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:14:18.770 [2024-11-17 14:51:03.529377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71310 ] 00:14:18.770 [2024-11-17 14:51:03.683625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:18.770 [2024-11-17 14:51:03.782685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.770 [2024-11-17 14:51:03.782791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.063 14:51:04 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.063 14:51:04 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:19.063 14:51:04 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:19.064 14:51:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.064 14:51:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:19.064 [2024-11-17 14:51:04.369948] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:19.064 [2024-11-17 14:51:04.371809] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:19.064 14:51:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.064 14:51:04 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:19.064 14:51:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.064 14:51:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:19.064 malloc0 00:14:19.064 14:51:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.064 14:51:04 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:19.064 14:51:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.064 14:51:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:19.064 [2024-11-17 14:51:04.474087] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:19.064 [2024-11-17 14:51:04.474126] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:19.064 [2024-11-17 14:51:04.474136] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:19.064 [2024-11-17 14:51:04.481978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:19.064 [2024-11-17 14:51:04.482000] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:19.064 [2024-11-17 14:51:04.482009] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:19.064 [2024-11-17 14:51:04.482085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:19.064 1 00:14:19.064 14:51:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.064 14:51:04 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71199 00:14:19.064 [2024-11-17 14:51:04.489954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:19.064 [2024-11-17 14:51:04.496480] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:19.064 [2024-11-17 14:51:04.504143] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:19.064 [2024-11-17 
14:51:04.504166] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:15.286 00:15:15.286 fio_test: (groupid=0, jobs=1): err= 0: pid=71202: Sun Nov 17 14:51:53 2024 00:15:15.286 read: IOPS=27.0k, BW=105MiB/s (110MB/s)(6319MiB/60002msec) 00:15:15.286 slat (nsec): min=1044, max=703030, avg=4915.29, stdev=1469.07 00:15:15.286 clat (usec): min=481, max=6077.0k, avg=2319.06, stdev=37300.61 00:15:15.286 lat (usec): min=485, max=6077.0k, avg=2323.98, stdev=37300.61 00:15:15.286 clat percentiles (usec): 00:15:15.286 | 1.00th=[ 1762], 5.00th=[ 1893], 10.00th=[ 1909], 20.00th=[ 1942], 00:15:15.286 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1975], 60.00th=[ 1991], 00:15:15.286 | 70.00th=[ 2008], 80.00th=[ 2024], 90.00th=[ 2073], 95.00th=[ 2802], 00:15:15.286 | 99.00th=[ 4621], 99.50th=[ 5538], 99.90th=[ 6652], 99.95th=[ 7439], 00:15:15.286 | 99.99th=[12911] 00:15:15.286 bw ( KiB/s): min=13392, max=123928, per=100.00%, avg=118780.81, stdev=14213.64, samples=108 00:15:15.286 iops : min= 3348, max=30982, avg=29695.20, stdev=3553.41, samples=108 00:15:15.286 write: IOPS=26.9k, BW=105MiB/s (110MB/s)(6314MiB/60002msec); 0 zone resets 00:15:15.286 slat (nsec): min=1063, max=1361.8k, avg=4945.63, stdev=1798.35 00:15:15.286 clat (usec): min=495, max=6077.2k, avg=2419.90, stdev=39105.64 00:15:15.286 lat (usec): min=499, max=6077.2k, avg=2424.85, stdev=39105.63 00:15:15.286 clat percentiles (usec): 00:15:15.286 | 1.00th=[ 1811], 5.00th=[ 1975], 10.00th=[ 2008], 20.00th=[ 2024], 00:15:15.286 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:15:15.286 | 70.00th=[ 2114], 80.00th=[ 2114], 90.00th=[ 2180], 95.00th=[ 2737], 00:15:15.286 | 99.00th=[ 4621], 99.50th=[ 5669], 99.90th=[ 6652], 99.95th=[ 7504], 00:15:15.286 | 99.99th=[12911] 00:15:15.286 bw ( KiB/s): min=13824, max=123392, per=100.00%, avg=118677.33, stdev=14237.49, samples=108 00:15:15.286 iops : min= 3456, max=30848, avg=29669.33, stdev=3559.37, samples=108 00:15:15.286 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:15.286 lat (msec) : 2=36.55%, 4=61.25%, 10=2.16%, 20=0.02%, >=2000=0.01% 00:15:15.286 cpu : usr=6.15%, sys=27.23%, ctx=105994, majf=0, minf=15 00:15:15.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:15.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.286 issued rwts: total=1617549,1616369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.286 00:15:15.286 Run status group 0 (all jobs): 00:15:15.286 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6319MiB (6625MB), run=60002-60002msec 00:15:15.286 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6314MiB (6621MB), run=60002-60002msec 00:15:15.286 00:15:15.286 Disk stats (read/write): 00:15:15.286 ublkb1: ios=1614306/1612979, merge=0/0, ticks=3658100/3685412, in_queue=7343513, util=99.86% 00:15:15.286 14:51:53 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.286 [2024-11-17 14:51:53.686365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:15.286 [2024-11-17 14:51:53.725048] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:15:15.286 [2024-11-17 14:51:53.725188] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:15.286 [2024-11-17 14:51:53.732945] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:15.286 [2024-11-17 14:51:53.733039] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:15.286 [2024-11-17 14:51:53.733047] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.286 14:51:53 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.286 [2024-11-17 14:51:53.749022] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:15.286 [2024-11-17 14:51:53.756939] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:15.286 [2024-11-17 14:51:53.756972] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.286 14:51:53 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:15.286 14:51:53 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:15.286 14:51:53 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71310 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 71310 ']' 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 71310 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71310 00:15:15.286 killing process with pid 71310 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71310' 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@973 -- # kill 71310 00:15:15.286 14:51:53 ublk_recovery -- common/autotest_common.sh@978 -- # wait 71310 00:15:15.286 [2024-11-17 14:51:54.888095] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:15.286 [2024-11-17 14:51:54.888141] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:15.286 00:15:15.286 real 1m4.573s 00:15:15.286 user 1m44.361s 00:15:15.286 sys 0m33.996s 00:15:15.286 14:51:55 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.286 ************************************ 00:15:15.286 END TEST ublk_recovery 00:15:15.287 ************************************ 00:15:15.287 14:51:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.287 14:51:55 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:15:15.287 14:51:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:15.287 14:51:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:15.287 14:51:55 -- common/autotest_common.sh@10 -- # set +x 00:15:15.287 14:51:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:15.287 14:51:55 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:15.287 14:51:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:15.287 14:51:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.287 14:51:55 -- common/autotest_common.sh@10 -- # set +x 00:15:15.287 ************************************ 00:15:15.287 START TEST ftl 00:15:15.287 ************************************ 00:15:15.287 14:51:55 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:15.287 * Looking for test storage... 00:15:15.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:15.287 14:51:56 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.287 14:51:56 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.287 14:51:56 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.287 14:51:56 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.287 14:51:56 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.287 14:51:56 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.287 14:51:56 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.287 14:51:56 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.287 14:51:56 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.287 14:51:56 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.287 14:51:56 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.287 14:51:56 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:15.287 14:51:56 ftl -- scripts/common.sh@345 -- # : 1 00:15:15.287 14:51:56 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.287 14:51:56 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.287 14:51:56 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:15.287 14:51:56 ftl -- scripts/common.sh@353 -- # local d=1 00:15:15.287 14:51:56 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.287 14:51:56 ftl -- scripts/common.sh@355 -- # echo 1 00:15:15.287 14:51:56 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.287 14:51:56 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:15.287 14:51:56 ftl -- scripts/common.sh@353 -- # local d=2 00:15:15.287 14:51:56 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.287 14:51:56 ftl -- scripts/common.sh@355 -- # echo 2 00:15:15.287 14:51:56 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.287 14:51:56 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.287 14:51:56 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.287 14:51:56 ftl -- scripts/common.sh@368 -- # return 0 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.287 --rc genhtml_branch_coverage=1 00:15:15.287 --rc genhtml_function_coverage=1 00:15:15.287 --rc genhtml_legend=1 00:15:15.287 --rc geninfo_all_blocks=1 00:15:15.287 --rc geninfo_unexecuted_blocks=1 00:15:15.287 00:15:15.287 ' 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.287 --rc genhtml_branch_coverage=1 00:15:15.287 --rc genhtml_function_coverage=1 00:15:15.287 --rc genhtml_legend=1 00:15:15.287 --rc geninfo_all_blocks=1 00:15:15.287 --rc geninfo_unexecuted_blocks=1 00:15:15.287 00:15:15.287 ' 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.287 --rc genhtml_branch_coverage=1 00:15:15.287 --rc genhtml_function_coverage=1 00:15:15.287 --rc genhtml_legend=1 00:15:15.287 --rc geninfo_all_blocks=1 00:15:15.287 --rc geninfo_unexecuted_blocks=1 00:15:15.287 00:15:15.287 ' 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.287 --rc genhtml_branch_coverage=1 00:15:15.287 --rc genhtml_function_coverage=1 00:15:15.287 --rc genhtml_legend=1 00:15:15.287 --rc geninfo_all_blocks=1 00:15:15.287 --rc geninfo_unexecuted_blocks=1 00:15:15.287 00:15:15.287 ' 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:15.287 14:51:56 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:15.287 14:51:56 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:15.287 14:51:56 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:15.287 14:51:56 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:15.287 14:51:56 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:15.287 14:51:56 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.287 14:51:56 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:15.287 14:51:56 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:15.287 14:51:56 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.287 14:51:56 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.287 14:51:56 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:15.287 14:51:56 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:15.287 14:51:56 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:15.287 14:51:56 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:15.287 14:51:56 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:15.287 14:51:56 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:15.287 14:51:56 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.287 14:51:56 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.287 14:51:56 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:15.287 14:51:56 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:15.287 14:51:56 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:15.287 14:51:56 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:15.287 14:51:56 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:15.287 14:51:56 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:15.287 14:51:56 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:15.287 14:51:56 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:15.287 14:51:56 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:15.287 14:51:56 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:15.287 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:15.287 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:15.287 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:15.287 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:15.287 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72121 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72121 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@835 -- # '[' -z 72121 ']' 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.287 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.287 14:51:56 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.287 14:51:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:15.287 [2024-11-17 14:51:56.688026] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:15:15.287 [2024-11-17 14:51:56.688157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72121 ] 00:15:15.287 [2024-11-17 14:51:56.851107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.287 [2024-11-17 14:51:56.971584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.287 14:51:57 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.287 14:51:57 ftl -- common/autotest_common.sh@868 -- # return 0 00:15:15.287 14:51:57 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:15.287 14:51:57 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:15.287 14:51:58 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:15.287 14:51:58 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@50 -- # break 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@63 -- # break 00:15:15.288 14:51:59 ftl -- ftl/ftl.sh@66 -- # killprocess 72121 00:15:15.288 14:51:59 ftl -- common/autotest_common.sh@954 -- # '[' -z 72121 ']' 00:15:15.288 14:51:59 ftl -- common/autotest_common.sh@958 -- # kill -0 72121 00:15:15.288 14:51:59 ftl -- common/autotest_common.sh@959 -- # uname 00:15:15.288 14:51:59 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.288 14:51:59 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72121 00:15:15.288 14:51:59 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.288 killing process with pid 72121 00:15:15.288 14:51:59 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.288 14:51:59 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72121' 00:15:15.288 14:51:59 ftl -- common/autotest_common.sh@973 -- # kill 72121 00:15:15.288 14:51:59 ftl -- common/autotest_common.sh@978 -- # wait 72121 00:15:15.288 14:52:00 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:15.288 14:52:00 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:15.288 14:52:00 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:15.288 14:52:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.288 14:52:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:15.549 ************************************ 00:15:15.549 START TEST ftl_fio_basic 00:15:15.549 ************************************ 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:15.549 * Looking for test storage... 00:15:15.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.549 --rc genhtml_branch_coverage=1 00:15:15.549 --rc genhtml_function_coverage=1 00:15:15.549 --rc genhtml_legend=1 00:15:15.549 --rc geninfo_all_blocks=1 00:15:15.549 --rc geninfo_unexecuted_blocks=1 00:15:15.549 00:15:15.549 ' 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.549 --rc genhtml_branch_coverage=1 00:15:15.549 --rc genhtml_function_coverage=1 00:15:15.549 --rc genhtml_legend=1 00:15:15.549 --rc geninfo_all_blocks=1 00:15:15.549 --rc geninfo_unexecuted_blocks=1 00:15:15.549 00:15:15.549 ' 00:15:15.549 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.549 --rc genhtml_branch_coverage=1 00:15:15.549 --rc genhtml_function_coverage=1 00:15:15.549 --rc genhtml_legend=1 00:15:15.549 --rc geninfo_all_blocks=1 00:15:15.549 --rc geninfo_unexecuted_blocks=1 00:15:15.549 00:15:15.549 ' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:15.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.550 --rc genhtml_branch_coverage=1 00:15:15.550 --rc genhtml_function_coverage=1 00:15:15.550 --rc genhtml_legend=1 00:15:15.550 --rc geninfo_all_blocks=1 00:15:15.550 --rc geninfo_unexecuted_blocks=1 00:15:15.550 00:15:15.550 ' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72256 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72256 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 72256 ']' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.550 14:52:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:15.550 [2024-11-17 14:52:01.083781] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
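The trace above shows the ftl_fio_basic suite being launched under run_test. As a minimal sketch (not part of the captured output; it assumes the same repo layout and PCI addresses that appear in the log), the equivalent manual invocation would be:

  cd /home/vagrant/spdk_repo/spdk
  # args: base bdev PCI address, NV-cache PCI address, suite name
  # ('basic' expands to: randw-verify randw-verify-j2 randw-verify-depth128)
  ./test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
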
00:15:15.550 [2024-11-17 14:52:01.084465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72256 ] 00:15:15.810 [2024-11-17 14:52:01.245685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:15.810 [2024-11-17 14:52:01.335383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.810 [2024-11-17 14:52:01.335682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.810 [2024-11-17 14:52:01.335711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.381 14:52:01 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.381 14:52:01 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:15:16.381 14:52:01 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:16.381 14:52:01 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:16.381 14:52:01 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:16.381 14:52:01 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:16.381 14:52:01 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:16.381 14:52:01 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:16.642 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:16.642 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:16.642 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:16.642 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:16.642 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:16.642 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:16.642 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:16.642 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:16.903 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:16.903 { 00:15:16.903 "name": "nvme0n1", 00:15:16.903 "aliases": [ 00:15:16.903 "f3d1ef08-8211-405b-84ce-a38b8dac1193" 00:15:16.903 ], 00:15:16.903 "product_name": "NVMe disk", 00:15:16.903 "block_size": 4096, 00:15:16.903 "num_blocks": 1310720, 00:15:16.903 "uuid": "f3d1ef08-8211-405b-84ce-a38b8dac1193", 00:15:16.903 "numa_id": -1, 00:15:16.903 "assigned_rate_limits": { 00:15:16.903 "rw_ios_per_sec": 0, 00:15:16.903 "rw_mbytes_per_sec": 0, 00:15:16.903 "r_mbytes_per_sec": 0, 00:15:16.903 "w_mbytes_per_sec": 0 00:15:16.903 }, 00:15:16.903 "claimed": false, 00:15:16.903 "zoned": false, 00:15:16.903 "supported_io_types": { 00:15:16.903 "read": true, 00:15:16.903 "write": true, 00:15:16.903 "unmap": true, 00:15:16.903 "flush": true, 00:15:16.903 "reset": true, 00:15:16.903 "nvme_admin": true, 00:15:16.903 "nvme_io": true, 00:15:16.903 "nvme_io_md": false, 00:15:16.903 "write_zeroes": true, 00:15:16.903 "zcopy": false, 00:15:16.904 "get_zone_info": false, 00:15:16.904 "zone_management": false, 00:15:16.904 "zone_append": false, 00:15:16.904 "compare": true, 00:15:16.904 "compare_and_write": false, 00:15:16.904 "abort": true, 00:15:16.904 
"seek_hole": false, 00:15:16.904 "seek_data": false, 00:15:16.904 "copy": true, 00:15:16.904 "nvme_iov_md": false 00:15:16.904 }, 00:15:16.904 "driver_specific": { 00:15:16.904 "nvme": [ 00:15:16.904 { 00:15:16.904 "pci_address": "0000:00:11.0", 00:15:16.904 "trid": { 00:15:16.904 "trtype": "PCIe", 00:15:16.904 "traddr": "0000:00:11.0" 00:15:16.904 }, 00:15:16.904 "ctrlr_data": { 00:15:16.904 "cntlid": 0, 00:15:16.904 "vendor_id": "0x1b36", 00:15:16.904 "model_number": "QEMU NVMe Ctrl", 00:15:16.904 "serial_number": "12341", 00:15:16.904 "firmware_revision": "8.0.0", 00:15:16.904 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:16.904 "oacs": { 00:15:16.904 "security": 0, 00:15:16.904 "format": 1, 00:15:16.904 "firmware": 0, 00:15:16.904 "ns_manage": 1 00:15:16.904 }, 00:15:16.904 "multi_ctrlr": false, 00:15:16.904 "ana_reporting": false 00:15:16.904 }, 00:15:16.904 "vs": { 00:15:16.904 "nvme_version": "1.4" 00:15:16.904 }, 00:15:16.904 "ns_data": { 00:15:16.904 "id": 1, 00:15:16.904 "can_share": false 00:15:16.904 } 00:15:16.904 } 00:15:16.904 ], 00:15:16.904 "mp_policy": "active_passive" 00:15:16.904 } 00:15:16.904 } 00:15:16.904 ]' 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:16.904 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:17.165 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:17.165 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:17.427 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0f6d0814-0116-4f6f-a63e-68882680cc5b 00:15:17.427 14:52:02 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0f6d0814-0116-4f6f-a63e-68882680cc5b 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=8f887826-577e-4d64-acf6-1b888d57ab37 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8f887826-577e-4d64-acf6-1b888d57ab37 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=8f887826-577e-4d64-acf6-1b888d57ab37 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 8f887826-577e-4d64-acf6-1b888d57ab37 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8f887826-577e-4d64-acf6-1b888d57ab37 
00:15:17.687 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:17.687 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f887826-577e-4d64-acf6-1b888d57ab37 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:17.948 { 00:15:17.948 "name": "8f887826-577e-4d64-acf6-1b888d57ab37", 00:15:17.948 "aliases": [ 00:15:17.948 "lvs/nvme0n1p0" 00:15:17.948 ], 00:15:17.948 "product_name": "Logical Volume", 00:15:17.948 "block_size": 4096, 00:15:17.948 "num_blocks": 26476544, 00:15:17.948 "uuid": "8f887826-577e-4d64-acf6-1b888d57ab37", 00:15:17.948 "assigned_rate_limits": { 00:15:17.948 "rw_ios_per_sec": 0, 00:15:17.948 "rw_mbytes_per_sec": 0, 00:15:17.948 "r_mbytes_per_sec": 0, 00:15:17.948 "w_mbytes_per_sec": 0 00:15:17.948 }, 00:15:17.948 "claimed": false, 00:15:17.948 "zoned": false, 00:15:17.948 "supported_io_types": { 00:15:17.948 "read": true, 00:15:17.948 "write": true, 00:15:17.948 "unmap": true, 00:15:17.948 "flush": false, 00:15:17.948 "reset": true, 00:15:17.948 "nvme_admin": false, 00:15:17.948 "nvme_io": false, 00:15:17.948 "nvme_io_md": false, 00:15:17.948 "write_zeroes": true, 00:15:17.948 "zcopy": false, 00:15:17.948 "get_zone_info": false, 00:15:17.948 "zone_management": false, 00:15:17.948 "zone_append": false, 00:15:17.948 "compare": false, 00:15:17.948 "compare_and_write": false, 00:15:17.948 "abort": false, 00:15:17.948 "seek_hole": true, 00:15:17.948 "seek_data": true, 00:15:17.948 "copy": false, 00:15:17.948 "nvme_iov_md": false 00:15:17.948 }, 00:15:17.948 "driver_specific": { 00:15:17.948 "lvol": { 00:15:17.948 "lvol_store_uuid": "0f6d0814-0116-4f6f-a63e-68882680cc5b", 00:15:17.948 "base_bdev": "nvme0n1", 00:15:17.948 "thin_provision": true, 00:15:17.948 "num_allocated_clusters": 0, 00:15:17.948 "snapshot": false, 00:15:17.948 "clone": false, 00:15:17.948 "esnap_clone": false 00:15:17.948 } 00:15:17.948 } 00:15:17.948 } 00:15:17.948 ]' 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:17.948 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:18.208 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:18.208 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:18.208 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 8f887826-577e-4d64-acf6-1b888d57ab37 00:15:18.208 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=8f887826-577e-4d64-acf6-1b888d57ab37 00:15:18.208 14:52:03 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:18.208 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:18.208 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:18.208 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f887826-577e-4d64-acf6-1b888d57ab37 00:15:18.469 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:18.469 { 00:15:18.469 "name": "8f887826-577e-4d64-acf6-1b888d57ab37", 00:15:18.469 "aliases": [ 00:15:18.469 "lvs/nvme0n1p0" 00:15:18.469 ], 00:15:18.469 "product_name": "Logical Volume", 00:15:18.469 "block_size": 4096, 00:15:18.469 "num_blocks": 26476544, 00:15:18.469 "uuid": "8f887826-577e-4d64-acf6-1b888d57ab37", 00:15:18.469 "assigned_rate_limits": { 00:15:18.469 "rw_ios_per_sec": 0, 00:15:18.469 "rw_mbytes_per_sec": 0, 00:15:18.469 "r_mbytes_per_sec": 0, 00:15:18.469 "w_mbytes_per_sec": 0 00:15:18.469 }, 00:15:18.469 "claimed": false, 00:15:18.469 "zoned": false, 00:15:18.469 "supported_io_types": { 00:15:18.469 "read": true, 00:15:18.469 "write": true, 00:15:18.469 "unmap": true, 00:15:18.469 "flush": false, 00:15:18.469 "reset": true, 00:15:18.469 "nvme_admin": false, 00:15:18.469 "nvme_io": false, 00:15:18.469 "nvme_io_md": false, 00:15:18.469 "write_zeroes": true, 00:15:18.469 "zcopy": false, 00:15:18.469 "get_zone_info": false, 00:15:18.469 "zone_management": false, 00:15:18.469 "zone_append": false, 00:15:18.469 "compare": false, 00:15:18.469 "compare_and_write": false, 00:15:18.469 "abort": false, 00:15:18.469 "seek_hole": true, 00:15:18.469 "seek_data": true, 00:15:18.469 "copy": false, 00:15:18.469 "nvme_iov_md": false 00:15:18.469 }, 00:15:18.469 "driver_specific": { 00:15:18.469 "lvol": { 00:15:18.469 "lvol_store_uuid": "0f6d0814-0116-4f6f-a63e-68882680cc5b", 00:15:18.469 "base_bdev": "nvme0n1", 00:15:18.469 "thin_provision": true, 00:15:18.469 "num_allocated_clusters": 0, 00:15:18.469 "snapshot": false, 00:15:18.469 "clone": false, 00:15:18.469 "esnap_clone": false 00:15:18.469 } 00:15:18.469 } 00:15:18.469 } 00:15:18.469 ]' 00:15:18.469 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:18.470 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:18.470 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:18.470 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:18.470 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:18.470 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:18.470 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:18.470 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:18.729 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 8f887826-577e-4d64-acf6-1b888d57ab37 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=8f887826-577e-4d64-acf6-1b888d57ab37 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f887826-577e-4d64-acf6-1b888d57ab37 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:18.729 { 00:15:18.729 "name": "8f887826-577e-4d64-acf6-1b888d57ab37", 00:15:18.729 "aliases": [ 00:15:18.729 "lvs/nvme0n1p0" 00:15:18.729 ], 00:15:18.729 "product_name": "Logical Volume", 00:15:18.729 "block_size": 4096, 00:15:18.729 "num_blocks": 26476544, 00:15:18.729 "uuid": "8f887826-577e-4d64-acf6-1b888d57ab37", 00:15:18.729 "assigned_rate_limits": { 00:15:18.729 "rw_ios_per_sec": 0, 00:15:18.729 "rw_mbytes_per_sec": 0, 00:15:18.729 "r_mbytes_per_sec": 0, 00:15:18.729 "w_mbytes_per_sec": 0 00:15:18.729 }, 00:15:18.729 "claimed": false, 00:15:18.729 "zoned": false, 00:15:18.729 "supported_io_types": { 00:15:18.729 "read": true, 00:15:18.729 "write": true, 00:15:18.729 "unmap": true, 00:15:18.729 "flush": false, 00:15:18.729 "reset": true, 00:15:18.729 "nvme_admin": false, 00:15:18.729 "nvme_io": false, 00:15:18.729 "nvme_io_md": false, 00:15:18.729 "write_zeroes": true, 00:15:18.729 "zcopy": false, 00:15:18.729 "get_zone_info": false, 00:15:18.729 "zone_management": false, 00:15:18.729 "zone_append": false, 00:15:18.729 "compare": false, 00:15:18.729 "compare_and_write": false, 00:15:18.729 "abort": false, 00:15:18.729 "seek_hole": true, 00:15:18.729 "seek_data": true, 00:15:18.729 "copy": false, 00:15:18.729 "nvme_iov_md": false 00:15:18.729 }, 00:15:18.729 "driver_specific": { 00:15:18.729 "lvol": { 00:15:18.729 "lvol_store_uuid": "0f6d0814-0116-4f6f-a63e-68882680cc5b", 00:15:18.729 "base_bdev": "nvme0n1", 00:15:18.729 "thin_provision": true, 00:15:18.729 "num_allocated_clusters": 0, 00:15:18.729 "snapshot": false, 00:15:18.729 "clone": false, 00:15:18.729 "esnap_clone": false 00:15:18.729 } 00:15:18.729 } 00:15:18.729 } 00:15:18.729 ]' 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:18.729 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:18.989 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:18.989 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:18.989 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:18.989 14:52:04 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:18.989 14:52:04 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:18.989 14:52:04 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8f887826-577e-4d64-acf6-1b888d57ab37 -c nvc0n1p0 --l2p_dram_limit 60 00:15:18.989 [2024-11-17 14:52:04.453678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.989 [2024-11-17 14:52:04.453718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:18.989 [2024-11-17 14:52:04.453731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:18.989 
[2024-11-17 14:52:04.453738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.989 [2024-11-17 14:52:04.453797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.989 [2024-11-17 14:52:04.453806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:18.989 [2024-11-17 14:52:04.453814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:15:18.989 [2024-11-17 14:52:04.453820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.989 [2024-11-17 14:52:04.453856] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:18.989 [2024-11-17 14:52:04.454420] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:18.989 [2024-11-17 14:52:04.454444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.990 [2024-11-17 14:52:04.454451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:18.990 [2024-11-17 14:52:04.454459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:15:18.990 [2024-11-17 14:52:04.454465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.990 [2024-11-17 14:52:04.454524] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID df380146-7653-4108-884e-90482c642435 00:15:18.990 [2024-11-17 14:52:04.455547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.990 [2024-11-17 14:52:04.455653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:18.990 [2024-11-17 14:52:04.455666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:18.990 [2024-11-17 14:52:04.455674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.990 [2024-11-17 14:52:04.460971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.990 [2024-11-17 14:52:04.461057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:18.990 [2024-11-17 14:52:04.461100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.233 ms 00:15:18.990 [2024-11-17 14:52:04.461120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.990 [2024-11-17 14:52:04.461214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.990 [2024-11-17 14:52:04.461258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:18.990 [2024-11-17 14:52:04.461275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:18.990 [2024-11-17 14:52:04.461328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.990 [2024-11-17 14:52:04.461397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.990 [2024-11-17 14:52:04.461449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:18.990 [2024-11-17 14:52:04.461467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:18.990 [2024-11-17 14:52:04.461506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.990 [2024-11-17 14:52:04.461543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:18.990 [2024-11-17 14:52:04.464467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.990 [2024-11-17 
14:52:04.464553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:18.990 [2024-11-17 14:52:04.464599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.928 ms 00:15:18.990 [2024-11-17 14:52:04.464618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.990 [2024-11-17 14:52:04.464664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.990 [2024-11-17 14:52:04.464726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:18.990 [2024-11-17 14:52:04.464747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:18.990 [2024-11-17 14:52:04.464762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.990 [2024-11-17 14:52:04.464817] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:18.990 [2024-11-17 14:52:04.464957] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:18.990 [2024-11-17 14:52:04.465020] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:18.990 [2024-11-17 14:52:04.465047] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:18.990 [2024-11-17 14:52:04.465073] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:18.990 [2024-11-17 14:52:04.465124] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:18.990 [2024-11-17 14:52:04.465151] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:18.990 [2024-11-17 14:52:04.465165] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:18.990 [2024-11-17 14:52:04.465181] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:18.990 [2024-11-17 14:52:04.465195] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:18.990 [2024-11-17 14:52:04.465240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.990 [2024-11-17 14:52:04.465283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:18.990 [2024-11-17 14:52:04.465319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:15:18.990 [2024-11-17 14:52:04.465337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.990 [2024-11-17 14:52:04.465426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.990 [2024-11-17 14:52:04.465443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:18.990 [2024-11-17 14:52:04.465459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:18.990 [2024-11-17 14:52:04.465493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.990 [2024-11-17 14:52:04.465611] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:18.990 [2024-11-17 14:52:04.465670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:18.990 [2024-11-17 14:52:04.465693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:18.990 [2024-11-17 14:52:04.465709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.990 [2024-11-17 14:52:04.465725] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:18.990 [2024-11-17 14:52:04.465739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:18.990 [2024-11-17 14:52:04.465755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:18.990 [2024-11-17 14:52:04.465848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:18.990 [2024-11-17 14:52:04.465867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:18.990 [2024-11-17 14:52:04.465881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:18.990 [2024-11-17 14:52:04.465897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:18.990 [2024-11-17 14:52:04.465912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:18.990 [2024-11-17 14:52:04.466013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:18.990 [2024-11-17 14:52:04.466031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:18.990 [2024-11-17 14:52:04.466047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:18.990 [2024-11-17 14:52:04.466061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:18.990 [2024-11-17 14:52:04.466094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:18.990 [2024-11-17 14:52:04.466139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:18.990 [2024-11-17 14:52:04.466171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:18.990 [2024-11-17 14:52:04.466201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:18.990 [2024-11-17 14:52:04.466215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:18.990 [2024-11-17 14:52:04.466267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:18.990 [2024-11-17 14:52:04.466285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:18.990 [2024-11-17 14:52:04.466315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:18.990 [2024-11-17 14:52:04.466329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:18.990 [2024-11-17 14:52:04.466359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:18.990 [2024-11-17 14:52:04.466401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:18.990 [2024-11-17 14:52:04.466533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:18.990 [2024-11-17 14:52:04.466601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:18.990 [2024-11-17 14:52:04.466620] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:18.990 [2024-11-17 14:52:04.466634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:18.990 [2024-11-17 14:52:04.466649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:18.990 [2024-11-17 14:52:04.466667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:18.990 [2024-11-17 14:52:04.466764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:18.990 [2024-11-17 14:52:04.466784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466798] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:18.990 [2024-11-17 14:52:04.466815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:18.990 [2024-11-17 14:52:04.466830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:18.990 [2024-11-17 14:52:04.466846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.990 [2024-11-17 14:52:04.466935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:18.990 [2024-11-17 14:52:04.466957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:18.990 [2024-11-17 14:52:04.466972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:18.990 [2024-11-17 14:52:04.466987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:18.990 [2024-11-17 14:52:04.467002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:18.990 [2024-11-17 14:52:04.467018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:18.990 [2024-11-17 14:52:04.467035] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:18.990 [2024-11-17 14:52:04.467122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:18.990 [2024-11-17 14:52:04.467146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:18.991 [2024-11-17 14:52:04.467169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:18.991 [2024-11-17 14:52:04.467191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:18.991 [2024-11-17 14:52:04.467278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:18.991 [2024-11-17 14:52:04.467301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:18.991 [2024-11-17 14:52:04.467324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:18.991 [2024-11-17 14:52:04.467376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:18.991 [2024-11-17 14:52:04.467545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:18.991 [2024-11-17 14:52:04.467569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:18.991 [2024-11-17 14:52:04.467612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:18.991 [2024-11-17 14:52:04.467635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:18.991 [2024-11-17 14:52:04.467710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:18.991 [2024-11-17 14:52:04.467732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:18.991 [2024-11-17 14:52:04.467755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:18.991 [2024-11-17 14:52:04.467777] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:18.991 [2024-11-17 14:52:04.467801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:18.991 [2024-11-17 14:52:04.467893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:18.991 [2024-11-17 14:52:04.467916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:18.991 [2024-11-17 14:52:04.467948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:18.991 [2024-11-17 14:52:04.467971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:18.991 [2024-11-17 14:52:04.467994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.991 [2024-11-17 14:52:04.468039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:18.991 [2024-11-17 14:52:04.468058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:15:18.991 [2024-11-17 14:52:04.468073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.991 [2024-11-17 14:52:04.468139] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
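A quick hand check of the layout dump above (a sketch for orientation, not output from this run): the superblock entries count 4096-byte blocks, while the region dump reports MiB, and the two agree. For example, region type 0xa (p2l0) with blk_offs:0x5120 blk_sz:0x800 works out to the p2l0 line shown earlier:

# sketch only -- convert superblock block offsets/sizes (4096 B blocks) to MiB
echo "p2l0 offset: $(( 0x5120 * 4096 / 1048576 )) MiB"   # 81 MiB, matching "offset: 81.12 MiB"
echo "p2l0 size:   $(( 0x800  * 4096 / 1048576 )) MiB"   # 8 MiB, matching "blocks: 8.00 MiB"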
00:15:18.991 [2024-11-17 14:52:04.468192] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:21.520 [2024-11-17 14:52:06.639526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.639723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:21.520 [2024-11-17 14:52:06.639784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2171.376 ms 00:15:21.520 [2024-11-17 14:52:06.639806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.661011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.661148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:21.520 [2024-11-17 14:52:06.661195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.011 ms 00:15:21.520 [2024-11-17 14:52:06.661216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.661334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.661357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:21.520 [2024-11-17 14:52:06.661374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:21.520 [2024-11-17 14:52:06.661392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.700405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.700580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:21.520 [2024-11-17 14:52:06.700725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.944 ms 00:15:21.520 [2024-11-17 14:52:06.700760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.700814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.700828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:21.520 [2024-11-17 14:52:06.700839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:21.520 [2024-11-17 14:52:06.700851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.701257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.701280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:21.520 [2024-11-17 14:52:06.701291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:15:21.520 [2024-11-17 14:52:06.701306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.701457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.701470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:21.520 [2024-11-17 14:52:06.701481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:15:21.520 [2024-11-17 14:52:06.701494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.717788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.717822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:21.520 [2024-11-17 
14:52:06.717834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.257 ms 00:15:21.520 [2024-11-17 14:52:06.717844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.729118] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:21.520 [2024-11-17 14:52:06.743406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.743545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:21.520 [2024-11-17 14:52:06.743564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.448 ms 00:15:21.520 [2024-11-17 14:52:06.743573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.786745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.786873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:21.520 [2024-11-17 14:52:06.786895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.136 ms 00:15:21.520 [2024-11-17 14:52:06.786903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.787101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.787112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:21.520 [2024-11-17 14:52:06.787125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:15:21.520 [2024-11-17 14:52:06.787132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.810141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.810174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:21.520 [2024-11-17 14:52:06.810186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.949 ms 00:15:21.520 [2024-11-17 14:52:06.810194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.832654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.832682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:21.520 [2024-11-17 14:52:06.832694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.405 ms 00:15:21.520 [2024-11-17 14:52:06.832701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.833281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.833325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:21.520 [2024-11-17 14:52:06.833337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:15:21.520 [2024-11-17 14:52:06.833345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.896675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.896794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:21.520 [2024-11-17 14:52:06.896849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.282 ms 00:15:21.520 [2024-11-17 14:52:06.896874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 
14:52:06.920727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.920830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:21.520 [2024-11-17 14:52:06.920886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.721 ms 00:15:21.520 [2024-11-17 14:52:06.920909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.943350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.943472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:21.520 [2024-11-17 14:52:06.943572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.381 ms 00:15:21.520 [2024-11-17 14:52:06.943595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.966446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.966548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:21.520 [2024-11-17 14:52:06.966600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.773 ms 00:15:21.520 [2024-11-17 14:52:06.966622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.966678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.966815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:21.520 [2024-11-17 14:52:06.966844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:21.520 [2024-11-17 14:52:06.966866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.966972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.520 [2024-11-17 14:52:06.966999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:21.520 [2024-11-17 14:52:06.967020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:21.520 [2024-11-17 14:52:06.967040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.520 [2024-11-17 14:52:06.967974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2513.869 ms, result 0 00:15:21.520 { 00:15:21.520 "name": "ftl0", 00:15:21.520 "uuid": "df380146-7653-4108-884e-90482c642435" 00:15:21.520 } 00:15:21.520 14:52:06 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:21.520 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:15:21.520 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.520 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:15:21.520 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.520 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.520 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:21.779 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:22.038 [ 00:15:22.038 { 00:15:22.038 "name": "ftl0", 00:15:22.038 "aliases": [ 00:15:22.038 "df380146-7653-4108-884e-90482c642435" 00:15:22.038 ], 00:15:22.038 "product_name": "FTL 
disk", 00:15:22.038 "block_size": 4096, 00:15:22.038 "num_blocks": 20971520, 00:15:22.038 "uuid": "df380146-7653-4108-884e-90482c642435", 00:15:22.038 "assigned_rate_limits": { 00:15:22.038 "rw_ios_per_sec": 0, 00:15:22.038 "rw_mbytes_per_sec": 0, 00:15:22.038 "r_mbytes_per_sec": 0, 00:15:22.038 "w_mbytes_per_sec": 0 00:15:22.038 }, 00:15:22.038 "claimed": false, 00:15:22.038 "zoned": false, 00:15:22.038 "supported_io_types": { 00:15:22.038 "read": true, 00:15:22.038 "write": true, 00:15:22.038 "unmap": true, 00:15:22.039 "flush": true, 00:15:22.039 "reset": false, 00:15:22.039 "nvme_admin": false, 00:15:22.039 "nvme_io": false, 00:15:22.039 "nvme_io_md": false, 00:15:22.039 "write_zeroes": true, 00:15:22.039 "zcopy": false, 00:15:22.039 "get_zone_info": false, 00:15:22.039 "zone_management": false, 00:15:22.039 "zone_append": false, 00:15:22.039 "compare": false, 00:15:22.039 "compare_and_write": false, 00:15:22.039 "abort": false, 00:15:22.039 "seek_hole": false, 00:15:22.039 "seek_data": false, 00:15:22.039 "copy": false, 00:15:22.039 "nvme_iov_md": false 00:15:22.039 }, 00:15:22.039 "driver_specific": { 00:15:22.039 "ftl": { 00:15:22.039 "base_bdev": "8f887826-577e-4d64-acf6-1b888d57ab37", 00:15:22.039 "cache": "nvc0n1p0" 00:15:22.039 } 00:15:22.039 } 00:15:22.039 } 00:15:22.039 ] 00:15:22.039 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:15:22.039 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:22.039 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:22.039 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:22.311 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:22.312 [2024-11-17 14:52:07.764946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.312 [2024-11-17 14:52:07.765078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:22.312 [2024-11-17 14:52:07.765095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:22.312 [2024-11-17 14:52:07.765103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.312 [2024-11-17 14:52:07.765144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:22.312 [2024-11-17 14:52:07.767289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.312 [2024-11-17 14:52:07.767312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:22.312 [2024-11-17 14:52:07.767321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:15:22.312 [2024-11-17 14:52:07.767328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.312 [2024-11-17 14:52:07.767767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.312 [2024-11-17 14:52:07.767782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:22.312 [2024-11-17 14:52:07.767791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:15:22.312 [2024-11-17 14:52:07.767797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.312 [2024-11-17 14:52:07.770231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.312 [2024-11-17 14:52:07.770247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:22.312 
[2024-11-17 14:52:07.770256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.414 ms 00:15:22.312 [2024-11-17 14:52:07.770263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.312 [2024-11-17 14:52:07.774990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.312 [2024-11-17 14:52:07.775010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:22.312 [2024-11-17 14:52:07.775019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:15:22.313 [2024-11-17 14:52:07.775025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.313 [2024-11-17 14:52:07.793826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.313 [2024-11-17 14:52:07.793851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:22.313 [2024-11-17 14:52:07.793861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.724 ms 00:15:22.313 [2024-11-17 14:52:07.793866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.313 [2024-11-17 14:52:07.805521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.313 [2024-11-17 14:52:07.805548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:22.313 [2024-11-17 14:52:07.805559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.604 ms 00:15:22.313 [2024-11-17 14:52:07.805567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.313 [2024-11-17 14:52:07.805727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.313 [2024-11-17 14:52:07.805735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:22.313 [2024-11-17 14:52:07.805743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:15:22.313 [2024-11-17 14:52:07.805749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.313 [2024-11-17 14:52:07.823647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.313 [2024-11-17 14:52:07.823672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:22.313 [2024-11-17 14:52:07.823681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.871 ms 00:15:22.313 [2024-11-17 14:52:07.823686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.313 [2024-11-17 14:52:07.841001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.313 [2024-11-17 14:52:07.841026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:22.313 [2024-11-17 14:52:07.841035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.272 ms 00:15:22.313 [2024-11-17 14:52:07.841040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.576 [2024-11-17 14:52:07.858087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.576 [2024-11-17 14:52:07.858113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:22.576 [2024-11-17 14:52:07.858121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.012 ms 00:15:22.576 [2024-11-17 14:52:07.858127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.576 [2024-11-17 14:52:07.875195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.576 [2024-11-17 14:52:07.875220] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:22.576 [2024-11-17 14:52:07.875228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.984 ms 00:15:22.576 [2024-11-17 14:52:07.875234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.576 [2024-11-17 14:52:07.875268] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:22.576 [2024-11-17 14:52:07.875278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 
[2024-11-17 14:52:07.875420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:22.576 [2024-11-17 14:52:07.875489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:22.577 [2024-11-17 14:52:07.875590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:22.577 [2024-11-17 14:52:07.875960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:22.577 [2024-11-17 14:52:07.875967] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df380146-7653-4108-884e-90482c642435 00:15:22.577 [2024-11-17 14:52:07.875973] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:22.577 [2024-11-17 14:52:07.875991] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:22.577 [2024-11-17 14:52:07.875996] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:22.577 [2024-11-17 14:52:07.876005] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:22.577 [2024-11-17 14:52:07.876010] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:22.577 [2024-11-17 14:52:07.876017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:22.577 [2024-11-17 14:52:07.876022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:22.577 [2024-11-17 14:52:07.876028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:22.577 [2024-11-17 14:52:07.876033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:22.577 [2024-11-17 14:52:07.876040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.578 [2024-11-17 14:52:07.876045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:22.578 [2024-11-17 14:52:07.876052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:15:22.578 [2024-11-17 14:52:07.876058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:07.885686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.578 [2024-11-17 14:52:07.885712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:22.578 [2024-11-17 14:52:07.885720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.582 ms 00:15:22.578 [2024-11-17 14:52:07.885726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:07.886034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.578 [2024-11-17 14:52:07.886042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:22.578 [2024-11-17 14:52:07.886050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:15:22.578 [2024-11-17 14:52:07.886055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:07.920732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:07.920847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:22.578 [2024-11-17 14:52:07.920862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:07.920868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
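The Rollback entries that follow are part of the 'FTL shutdown' management process triggered by the bdev_ftl_unload call at ftl/fio.sh line 73 above. Run by hand against the same target it is a single RPC (a sketch, using the rpc.py path from this run):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0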
00:15:22.578 [2024-11-17 14:52:07.920935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:07.920943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:22.578 [2024-11-17 14:52:07.920950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:07.920956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:07.921026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:07.921034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:22.578 [2024-11-17 14:52:07.921043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:07.921048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:07.921078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:07.921084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:22.578 [2024-11-17 14:52:07.921090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:07.921096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:07.984790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:07.984826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:22.578 [2024-11-17 14:52:07.984836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:07.984843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:08.033808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:08.033834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:22.578 [2024-11-17 14:52:08.033843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:08.033850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:08.033934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:08.033943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:22.578 [2024-11-17 14:52:08.033951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:08.033958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:08.034022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:08.034029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:22.578 [2024-11-17 14:52:08.034037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:08.034042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:08.034127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:08.034135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:22.578 [2024-11-17 14:52:08.034142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 
14:52:08.034147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:08.034190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:08.034197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:22.578 [2024-11-17 14:52:08.034205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:08.034210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:08.034245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:08.034252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:22.578 [2024-11-17 14:52:08.034259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:08.034264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:08.034313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.578 [2024-11-17 14:52:08.034320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:22.578 [2024-11-17 14:52:08.034327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.578 [2024-11-17 14:52:08.034333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.578 [2024-11-17 14:52:08.034465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 269.501 ms, result 0 00:15:22.578 true 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72256 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 72256 ']' 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 72256 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72256 00:15:22.578 killing process with pid 72256 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72256' 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 72256 00:15:22.578 14:52:08 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 72256 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:29.143 14:52:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:29.143 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:29.143 fio-3.35 00:15:29.143 Starting 1 thread 00:15:34.414 00:15:34.414 test: (groupid=0, jobs=1): err= 0: pid=72435: Sun Nov 17 14:52:19 2024 00:15:34.414 read: IOPS=875, BW=58.2MiB/s (61.0MB/s)(255MiB/4376msec) 00:15:34.414 slat (nsec): min=3807, max=20461, avg=5326.79, stdev=1627.78 00:15:34.414 clat (usec): min=247, max=1730, avg=523.60, stdev=175.47 00:15:34.414 lat (usec): min=255, max=1735, avg=528.93, stdev=175.64 00:15:34.414 clat percentiles (usec): 00:15:34.414 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 388], 00:15:34.414 | 30.00th=[ 457], 40.00th=[ 474], 50.00th=[ 515], 60.00th=[ 523], 00:15:34.414 | 70.00th=[ 537], 80.00th=[ 644], 90.00th=[ 807], 95.00th=[ 848], 00:15:34.414 | 99.00th=[ 988], 99.50th=[ 1057], 99.90th=[ 1237], 99.95th=[ 1516], 00:15:34.414 | 99.99th=[ 1729] 00:15:34.414 write: IOPS=882, BW=58.6MiB/s (61.5MB/s)(256MiB/4369msec); 0 zone resets 00:15:34.414 slat (nsec): min=14305, max=46275, avg=19971.73, stdev=3525.29 00:15:34.414 clat (usec): min=264, max=1961, avg=578.52, stdev=186.91 00:15:34.414 lat (usec): min=279, max=1980, avg=598.50, stdev=185.76 00:15:34.414 clat percentiles (usec): 00:15:34.414 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 469], 00:15:34.414 | 30.00th=[ 523], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:15:34.414 | 70.00th=[ 594], 80.00th=[ 693], 90.00th=[ 889], 95.00th=[ 914], 00:15:34.414 | 99.00th=[ 1074], 99.50th=[ 1156], 99.90th=[ 1401], 99.95th=[ 1680], 00:15:34.414 | 99.99th=[ 1958] 00:15:34.414 bw ( KiB/s): min=45696, max=98464, per=100.00%, avg=60707.00, stdev=16424.89, samples=8 00:15:34.414 iops : min= 672, max= 1448, avg=892.75, stdev=241.54, samples=8 00:15:34.414 lat (usec) : 250=0.01%, 500=36.57%, 750=45.06%, 
1000=17.30% 00:15:34.414 lat (msec) : 2=1.05% 00:15:34.414 cpu : usr=99.29%, sys=0.02%, ctx=9, majf=0, minf=1169 00:15:34.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:34.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.414 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:34.414 00:15:34.414 Run status group 0 (all jobs): 00:15:34.414 READ: bw=58.2MiB/s (61.0MB/s), 58.2MiB/s-58.2MiB/s (61.0MB/s-61.0MB/s), io=255MiB (267MB), run=4376-4376msec 00:15:34.414 WRITE: bw=58.6MiB/s (61.5MB/s), 58.6MiB/s-58.6MiB/s (61.5MB/s-61.5MB/s), io=256MiB (269MB), run=4369-4369msec 00:15:35.799 ----------------------------------------------------- 00:15:35.799 Suppressions used: 00:15:35.799 count bytes template 00:15:35.799 1 5 /usr/src/fio/parse.c 00:15:35.800 1 8 libtcmalloc_minimal.so 00:15:35.800 1 904 libcrypto.so 00:15:35.800 ----------------------------------------------------- 00:15:35.800 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:35.800 14:52:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:36.061 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:36.061 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:36.061 fio-3.35 00:15:36.061 Starting 2 threads 00:16:02.619 00:16:02.619 first_half: (groupid=0, jobs=1): err= 0: pid=72538: Sun Nov 17 14:52:45 2024 00:16:02.619 read: IOPS=2897, BW=11.3MiB/s (11.9MB/s)(255MiB/22504msec) 00:16:02.619 slat (nsec): min=3015, max=27926, avg=4075.29, stdev=1036.92 00:16:02.619 clat (usec): min=650, max=377256, avg=35153.32, stdev=18104.95 00:16:02.619 lat (usec): min=654, max=377262, avg=35157.39, stdev=18105.11 00:16:02.619 clat percentiles (msec): 00:16:02.619 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:16:02.619 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:02.619 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 50], 00:16:02.619 | 99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 174], 99.95th=[ 296], 00:16:02.619 | 99.99th=[ 363] 00:16:02.619 write: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(256MiB/15812msec); 0 zone resets 00:16:02.619 slat (usec): min=3, max=1303, avg= 5.83, stdev= 9.99 00:16:02.619 clat (usec): min=374, max=75599, avg=8958.84, stdev=14502.88 00:16:02.619 lat (usec): min=382, max=75605, avg=8964.67, stdev=14503.03 00:16:02.619 clat percentiles (usec): 00:16:02.619 | 1.00th=[ 676], 5.00th=[ 799], 10.00th=[ 947], 20.00th=[ 1172], 00:16:02.619 | 30.00th=[ 2311], 40.00th=[ 3490], 50.00th=[ 4424], 60.00th=[ 5080], 00:16:02.619 | 70.00th=[ 5669], 80.00th=[10945], 90.00th=[18744], 95.00th=[58983], 00:16:02.619 | 99.00th=[66847], 99.50th=[68682], 99.90th=[70779], 99.95th=[71828], 00:16:02.619 | 99.99th=[73925] 00:16:02.619 bw ( KiB/s): min= 6768, max=50608, per=100.00%, avg=27591.11, stdev=13520.34, samples=19 00:16:02.619 iops : min= 1692, max=12652, avg=6897.74, stdev=3380.08, samples=19 00:16:02.619 lat (usec) : 500=0.02%, 750=1.57%, 1000=4.55% 00:16:02.619 lat (msec) : 2=8.41%, 4=8.59%, 10=16.38%, 20=7.06%, 50=48.23% 00:16:02.619 lat (msec) : 100=4.13%, 250=1.03%, 500=0.03% 00:16:02.619 cpu : usr=99.18%, sys=0.18%, ctx=53, majf=0, minf=5569 00:16:02.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:02.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.619 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.619 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.619 second_half: (groupid=0, jobs=1): err= 0: pid=72539: Sun Nov 17 14:52:45 2024 00:16:02.619 read: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(255MiB/22744msec) 00:16:02.619 slat (nsec): min=2945, max=26494, avg=5095.51, stdev=930.48 00:16:02.619 clat (usec): min=661, max=412657, avg=34709.42, stdev=19795.30 00:16:02.619 lat (usec): min=666, max=412663, avg=34714.52, stdev=19795.38 00:16:02.619 clat percentiles (msec): 00:16:02.619 | 1.00th=[ 12], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 31], 00:16:02.619 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:02.619 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 
39], 95.00th=[ 46], 00:16:02.619 | 99.00th=[ 136], 99.50th=[ 150], 99.90th=[ 257], 99.95th=[ 388], 00:16:02.619 | 99.99th=[ 409] 00:16:02.619 write: IOPS=3190, BW=12.5MiB/s (13.1MB/s)(256MiB/20539msec); 0 zone resets 00:16:02.619 slat (usec): min=3, max=928, avg= 6.78, stdev= 6.05 00:16:02.619 clat (usec): min=375, max=76051, avg=9805.06, stdev=15217.38 00:16:02.619 lat (usec): min=382, max=76056, avg=9811.84, stdev=15217.68 00:16:02.619 clat percentiles (usec): 00:16:02.619 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 914], 20.00th=[ 1319], 00:16:02.619 | 30.00th=[ 2769], 40.00th=[ 3621], 50.00th=[ 4686], 60.00th=[ 5342], 00:16:02.619 | 70.00th=[ 5932], 80.00th=[12649], 90.00th=[24773], 95.00th=[59507], 00:16:02.619 | 99.00th=[67634], 99.50th=[69731], 99.90th=[71828], 99.95th=[72877], 00:16:02.619 | 99.99th=[74974] 00:16:02.619 bw ( KiB/s): min= 208, max=51968, per=93.36%, avg=23831.27, stdev=14712.07, samples=22 00:16:02.619 iops : min= 52, max=12992, avg=5957.82, stdev=3678.02, samples=22 00:16:02.619 lat (usec) : 500=0.02%, 750=1.89%, 1000=4.49% 00:16:02.619 lat (msec) : 2=5.26%, 4=10.25%, 10=17.15%, 20=7.25%, 50=48.66% 00:16:02.619 lat (msec) : 100=3.96%, 250=1.03%, 500=0.05% 00:16:02.619 cpu : usr=99.28%, sys=0.12%, ctx=37, majf=0, minf=5538 00:16:02.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:02.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.619 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.619 issued rwts: total=65328,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.619 00:16:02.619 Run status group 0 (all jobs): 00:16:02.619 READ: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-11.3MiB/s (11.8MB/s-11.9MB/s), io=510MiB (535MB), run=22504-22744msec 00:16:02.619 WRITE: bw=24.9MiB/s (26.1MB/s), 12.5MiB/s-16.2MiB/s (13.1MB/s-17.0MB/s), io=512MiB (537MB), run=15812-20539msec 00:16:02.619 ----------------------------------------------------- 00:16:02.619 Suppressions used: 00:16:02.619 count bytes template 00:16:02.619 2 10 /usr/src/fio/parse.c 00:16:02.619 2 192 /usr/src/fio/iolog.c 00:16:02.619 1 8 libtcmalloc_minimal.so 00:16:02.619 1 904 libcrypto.so 00:16:02.619 ----------------------------------------------------- 00:16:02.619 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:02.619 14:52:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:02.619 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:02.619 fio-3.35 00:16:02.619 Starting 1 thread 00:16:24.637 00:16:24.637 test: (groupid=0, jobs=1): err= 0: pid=72846: Sun Nov 17 14:53:06 2024 00:16:24.637 read: IOPS=6348, BW=24.8MiB/s (26.0MB/s)(255MiB/10271msec) 00:16:24.637 slat (nsec): min=2954, max=25066, avg=4605.58, stdev=1133.73 00:16:24.637 clat (usec): min=1010, max=40086, avg=20155.64, stdev=2370.38 00:16:24.637 lat (usec): min=1019, max=40089, avg=20160.24, stdev=2370.35 00:16:24.637 clat percentiles (usec): 00:16:24.637 | 1.00th=[15533], 5.00th=[16712], 10.00th=[17433], 20.00th=[18482], 00:16:24.637 | 30.00th=[19006], 40.00th=[19530], 50.00th=[20055], 60.00th=[20317], 00:16:24.637 | 70.00th=[20841], 80.00th=[21627], 90.00th=[22938], 95.00th=[24511], 00:16:24.637 | 99.00th=[27657], 99.50th=[28443], 99.90th=[31589], 99.95th=[35390], 00:16:24.637 | 99.99th=[39584] 00:16:24.637 write: IOPS=9131, BW=35.7MiB/s (37.4MB/s)(256MiB/7177msec); 0 zone resets 00:16:24.637 slat (usec): min=4, max=724, avg= 6.25, stdev= 6.24 00:16:24.637 clat (usec): min=684, max=77146, avg=13953.71, stdev=16606.10 00:16:24.637 lat (usec): min=689, max=77153, avg=13959.96, stdev=16606.10 00:16:24.637 clat percentiles (usec): 00:16:24.637 | 1.00th=[ 1254], 5.00th=[ 1532], 10.00th=[ 1713], 20.00th=[ 1975], 00:16:24.637 | 30.00th=[ 2343], 40.00th=[ 3425], 50.00th=[ 8848], 60.00th=[11207], 00:16:24.637 | 70.00th=[13698], 80.00th=[16581], 90.00th=[48497], 95.00th=[51643], 00:16:24.637 | 99.00th=[57410], 99.50th=[59507], 99.90th=[63177], 99.95th=[64226], 00:16:24.637 | 99.99th=[72877] 00:16:24.637 bw ( KiB/s): min=10888, max=47576, per=95.69%, avg=34952.53, stdev=8596.03, samples=15 00:16:24.637 iops : min= 2722, max=11894, avg=8738.13, stdev=2149.01, samples=15 00:16:24.637 lat (usec) : 750=0.01%, 1000=0.04% 00:16:24.637 lat (msec) : 2=10.33%, 4=10.17%, 10=7.04%, 20=39.58%, 50=29.03% 00:16:24.637 lat (msec) : 100=3.81% 00:16:24.637 cpu : usr=99.16%, sys=0.14%, 
ctx=37, majf=0, minf=5565 00:16:24.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:24.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.637 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.637 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.637 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.637 00:16:24.637 Run status group 0 (all jobs): 00:16:24.637 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=255MiB (267MB), run=10271-10271msec 00:16:24.637 WRITE: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=256MiB (268MB), run=7177-7177msec 00:16:24.637 ----------------------------------------------------- 00:16:24.637 Suppressions used: 00:16:24.637 count bytes template 00:16:24.637 1 5 /usr/src/fio/parse.c 00:16:24.637 2 192 /usr/src/fio/iolog.c 00:16:24.637 1 8 libtcmalloc_minimal.so 00:16:24.637 1 904 libcrypto.so 00:16:24.637 ----------------------------------------------------- 00:16:24.637 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:24.637 Remove shared memory files 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57060 /dev/shm/spdk_tgt_trace.pid71164 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:24.637 ************************************ 00:16:24.637 END TEST ftl_fio_basic 00:16:24.637 ************************************ 00:16:24.637 00:16:24.637 real 1m7.568s 00:16:24.637 user 2m14.188s 00:16:24.637 sys 0m14.102s 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.637 14:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:24.637 14:53:08 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:24.637 14:53:08 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:24.637 14:53:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.637 14:53:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:24.637 ************************************ 00:16:24.637 START TEST ftl_bdevperf 00:16:24.637 ************************************ 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:24.637 * Looking for test storage... 
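Note: the two fio passes above (randw-verify-j2 and randw-verify-depth128) are stock fio driven through SPDK's bdev fio plugin; the xtrace shows the plugin being probed with ldd for its AddressSanitizer dependency, which is then prepended to LD_PRELOAD before fio is launched against the job file. A minimal shell sketch of that invocation pattern, using the paths from the trace; the job-file body is an assumption reconstructed only from the header fio printed (rw=randwrite, bs=4096, ioengine=spdk_bdev, iodepth=128):

    # Sketch: run stock fio through the SPDK bdev plugin, preloading ASan first
    # (library path taken from the ldd output in the trace above).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
    # Hypothetical approximation of the job file, based only on the header fio printed;
    # the real file also points the plugin at a bdev JSON config (the ftl.json removed in
    # the cleanup above) and presumably enables verification, given the test name.
    #   [test]
    #   ioengine=spdk_bdev
    #   rw=randwrite
    #   bs=4096
    #   iodepth=128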
00:16:24.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:24.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.637 --rc genhtml_branch_coverage=1 00:16:24.637 --rc genhtml_function_coverage=1 00:16:24.637 --rc genhtml_legend=1 00:16:24.637 --rc geninfo_all_blocks=1 00:16:24.637 --rc geninfo_unexecuted_blocks=1 00:16:24.637 00:16:24.637 ' 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:24.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.637 --rc genhtml_branch_coverage=1 00:16:24.637 
--rc genhtml_function_coverage=1 00:16:24.637 --rc genhtml_legend=1 00:16:24.637 --rc geninfo_all_blocks=1 00:16:24.637 --rc geninfo_unexecuted_blocks=1 00:16:24.637 00:16:24.637 ' 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:24.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.637 --rc genhtml_branch_coverage=1 00:16:24.637 --rc genhtml_function_coverage=1 00:16:24.637 --rc genhtml_legend=1 00:16:24.637 --rc geninfo_all_blocks=1 00:16:24.637 --rc geninfo_unexecuted_blocks=1 00:16:24.637 00:16:24.637 ' 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:24.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.637 --rc genhtml_branch_coverage=1 00:16:24.637 --rc genhtml_function_coverage=1 00:16:24.637 --rc genhtml_legend=1 00:16:24.637 --rc geninfo_all_blocks=1 00:16:24.637 --rc geninfo_unexecuted_blocks=1 00:16:24.637 00:16:24.637 ' 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.637 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73128 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73128 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 73128 ']' 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:24.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.638 14:53:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:24.638 [2024-11-17 14:53:08.708531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
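Note: bdevperf is launched here with -z, so it starts with no bdevs and waits for configuration over its RPC socket; the FTL device under test is then assembled with scripts/rpc.py (traced below) and the workloads are triggered afterwards with the bdevperf.py helper. A condensed sketch of that control flow, using only commands that appear in this trace:

    # Sketch of the bdevperf flow used by ftl/bdevperf.sh (paths as in the trace):
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &   # -z: wait for RPC configuration
    bdevperf_pid=$!
    # ... waitforlisten $bdevperf_pid, then build ftl0 via rpc.py (see the bdev_ftl_create sequence below) ...
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        perform_tests -q 1 -w randwrite -t 4 -o 69632    # first of the workloads run below
    # cleanup is handled by the script's 'killprocess $bdevperf_pid' trap on SIGINT/SIGTERM/EXIT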
00:16:24.638 [2024-11-17 14:53:08.708855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73128 ] 00:16:24.638 [2024-11-17 14:53:08.869860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.638 [2024-11-17 14:53:08.970356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:24.638 { 00:16:24.638 "name": "nvme0n1", 00:16:24.638 "aliases": [ 00:16:24.638 "10ea996f-3fad-4aa6-b813-20071cb942b1" 00:16:24.638 ], 00:16:24.638 "product_name": "NVMe disk", 00:16:24.638 "block_size": 4096, 00:16:24.638 "num_blocks": 1310720, 00:16:24.638 "uuid": "10ea996f-3fad-4aa6-b813-20071cb942b1", 00:16:24.638 "numa_id": -1, 00:16:24.638 "assigned_rate_limits": { 00:16:24.638 "rw_ios_per_sec": 0, 00:16:24.638 "rw_mbytes_per_sec": 0, 00:16:24.638 "r_mbytes_per_sec": 0, 00:16:24.638 "w_mbytes_per_sec": 0 00:16:24.638 }, 00:16:24.638 "claimed": true, 00:16:24.638 "claim_type": "read_many_write_one", 00:16:24.638 "zoned": false, 00:16:24.638 "supported_io_types": { 00:16:24.638 "read": true, 00:16:24.638 "write": true, 00:16:24.638 "unmap": true, 00:16:24.638 "flush": true, 00:16:24.638 "reset": true, 00:16:24.638 "nvme_admin": true, 00:16:24.638 "nvme_io": true, 00:16:24.638 "nvme_io_md": false, 00:16:24.638 "write_zeroes": true, 00:16:24.638 "zcopy": false, 00:16:24.638 "get_zone_info": false, 00:16:24.638 "zone_management": false, 00:16:24.638 "zone_append": false, 00:16:24.638 "compare": true, 00:16:24.638 "compare_and_write": false, 00:16:24.638 "abort": true, 00:16:24.638 "seek_hole": false, 00:16:24.638 "seek_data": false, 00:16:24.638 "copy": true, 00:16:24.638 "nvme_iov_md": false 00:16:24.638 }, 00:16:24.638 "driver_specific": { 00:16:24.638 
"nvme": [ 00:16:24.638 { 00:16:24.638 "pci_address": "0000:00:11.0", 00:16:24.638 "trid": { 00:16:24.638 "trtype": "PCIe", 00:16:24.638 "traddr": "0000:00:11.0" 00:16:24.638 }, 00:16:24.638 "ctrlr_data": { 00:16:24.638 "cntlid": 0, 00:16:24.638 "vendor_id": "0x1b36", 00:16:24.638 "model_number": "QEMU NVMe Ctrl", 00:16:24.638 "serial_number": "12341", 00:16:24.638 "firmware_revision": "8.0.0", 00:16:24.638 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:24.638 "oacs": { 00:16:24.638 "security": 0, 00:16:24.638 "format": 1, 00:16:24.638 "firmware": 0, 00:16:24.638 "ns_manage": 1 00:16:24.638 }, 00:16:24.638 "multi_ctrlr": false, 00:16:24.638 "ana_reporting": false 00:16:24.638 }, 00:16:24.638 "vs": { 00:16:24.638 "nvme_version": "1.4" 00:16:24.638 }, 00:16:24.638 "ns_data": { 00:16:24.638 "id": 1, 00:16:24.638 "can_share": false 00:16:24.638 } 00:16:24.638 } 00:16:24.638 ], 00:16:24.638 "mp_policy": "active_passive" 00:16:24.638 } 00:16:24.638 } 00:16:24.638 ]' 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:24.638 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:24.638 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:24.638 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:24.638 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:16:24.638 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:24.638 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:24.638 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:24.638 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:24.638 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:24.900 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0f6d0814-0116-4f6f-a63e-68882680cc5b 00:16:24.900 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:24.900 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f6d0814-0116-4f6f-a63e-68882680cc5b 00:16:25.161 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:25.161 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=eb803f8c-0184-400c-abd1-8e431638c260 00:16:25.161 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eb803f8c-0184-400c-abd1-8e431638c260 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:25.421 14:53:10 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:25.421 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:25.681 { 00:16:25.681 "name": "1bb87da9-0c9b-4a68-aa0d-06717fef20b5", 00:16:25.681 "aliases": [ 00:16:25.681 "lvs/nvme0n1p0" 00:16:25.681 ], 00:16:25.681 "product_name": "Logical Volume", 00:16:25.681 "block_size": 4096, 00:16:25.681 "num_blocks": 26476544, 00:16:25.681 "uuid": "1bb87da9-0c9b-4a68-aa0d-06717fef20b5", 00:16:25.681 "assigned_rate_limits": { 00:16:25.681 "rw_ios_per_sec": 0, 00:16:25.681 "rw_mbytes_per_sec": 0, 00:16:25.681 "r_mbytes_per_sec": 0, 00:16:25.681 "w_mbytes_per_sec": 0 00:16:25.681 }, 00:16:25.681 "claimed": false, 00:16:25.681 "zoned": false, 00:16:25.681 "supported_io_types": { 00:16:25.681 "read": true, 00:16:25.681 "write": true, 00:16:25.681 "unmap": true, 00:16:25.681 "flush": false, 00:16:25.681 "reset": true, 00:16:25.681 "nvme_admin": false, 00:16:25.681 "nvme_io": false, 00:16:25.681 "nvme_io_md": false, 00:16:25.681 "write_zeroes": true, 00:16:25.681 "zcopy": false, 00:16:25.681 "get_zone_info": false, 00:16:25.681 "zone_management": false, 00:16:25.681 "zone_append": false, 00:16:25.681 "compare": false, 00:16:25.681 "compare_and_write": false, 00:16:25.681 "abort": false, 00:16:25.681 "seek_hole": true, 00:16:25.681 "seek_data": true, 00:16:25.681 "copy": false, 00:16:25.681 "nvme_iov_md": false 00:16:25.681 }, 00:16:25.681 "driver_specific": { 00:16:25.681 "lvol": { 00:16:25.681 "lvol_store_uuid": "eb803f8c-0184-400c-abd1-8e431638c260", 00:16:25.681 "base_bdev": "nvme0n1", 00:16:25.681 "thin_provision": true, 00:16:25.681 "num_allocated_clusters": 0, 00:16:25.681 "snapshot": false, 00:16:25.681 "clone": false, 00:16:25.681 "esnap_clone": false 00:16:25.681 } 00:16:25.681 } 00:16:25.681 } 00:16:25.681 ]' 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:25.681 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:25.943 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:25.943 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:25.943 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:25.943 14:53:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:25.943 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:25.943 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:25.943 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:25.943 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:26.205 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:26.205 { 00:16:26.205 "name": "1bb87da9-0c9b-4a68-aa0d-06717fef20b5", 00:16:26.205 "aliases": [ 00:16:26.205 "lvs/nvme0n1p0" 00:16:26.205 ], 00:16:26.205 "product_name": "Logical Volume", 00:16:26.205 "block_size": 4096, 00:16:26.205 "num_blocks": 26476544, 00:16:26.205 "uuid": "1bb87da9-0c9b-4a68-aa0d-06717fef20b5", 00:16:26.205 "assigned_rate_limits": { 00:16:26.205 "rw_ios_per_sec": 0, 00:16:26.205 "rw_mbytes_per_sec": 0, 00:16:26.205 "r_mbytes_per_sec": 0, 00:16:26.205 "w_mbytes_per_sec": 0 00:16:26.205 }, 00:16:26.205 "claimed": false, 00:16:26.205 "zoned": false, 00:16:26.205 "supported_io_types": { 00:16:26.205 "read": true, 00:16:26.205 "write": true, 00:16:26.205 "unmap": true, 00:16:26.205 "flush": false, 00:16:26.205 "reset": true, 00:16:26.205 "nvme_admin": false, 00:16:26.205 "nvme_io": false, 00:16:26.205 "nvme_io_md": false, 00:16:26.205 "write_zeroes": true, 00:16:26.205 "zcopy": false, 00:16:26.205 "get_zone_info": false, 00:16:26.205 "zone_management": false, 00:16:26.205 "zone_append": false, 00:16:26.205 "compare": false, 00:16:26.205 "compare_and_write": false, 00:16:26.205 "abort": false, 00:16:26.205 "seek_hole": true, 00:16:26.205 "seek_data": true, 00:16:26.205 "copy": false, 00:16:26.205 "nvme_iov_md": false 00:16:26.205 }, 00:16:26.205 "driver_specific": { 00:16:26.205 "lvol": { 00:16:26.205 "lvol_store_uuid": "eb803f8c-0184-400c-abd1-8e431638c260", 00:16:26.205 "base_bdev": "nvme0n1", 00:16:26.205 "thin_provision": true, 00:16:26.205 "num_allocated_clusters": 0, 00:16:26.205 "snapshot": false, 00:16:26.205 "clone": false, 00:16:26.205 "esnap_clone": false 00:16:26.205 } 00:16:26.205 } 00:16:26.205 } 00:16:26.205 ]' 00:16:26.205 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:26.205 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:26.205 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:26.205 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:26.205 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:26.205 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:26.205 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:26.205 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:26.465 14:53:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:26.465 14:53:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:26.465 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:26.465 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:26.465 14:53:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:16:26.465 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:26.465 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bb87da9-0c9b-4a68-aa0d-06717fef20b5 00:16:26.465 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:26.465 { 00:16:26.465 "name": "1bb87da9-0c9b-4a68-aa0d-06717fef20b5", 00:16:26.465 "aliases": [ 00:16:26.465 "lvs/nvme0n1p0" 00:16:26.465 ], 00:16:26.465 "product_name": "Logical Volume", 00:16:26.465 "block_size": 4096, 00:16:26.465 "num_blocks": 26476544, 00:16:26.465 "uuid": "1bb87da9-0c9b-4a68-aa0d-06717fef20b5", 00:16:26.465 "assigned_rate_limits": { 00:16:26.465 "rw_ios_per_sec": 0, 00:16:26.465 "rw_mbytes_per_sec": 0, 00:16:26.465 "r_mbytes_per_sec": 0, 00:16:26.465 "w_mbytes_per_sec": 0 00:16:26.465 }, 00:16:26.465 "claimed": false, 00:16:26.465 "zoned": false, 00:16:26.465 "supported_io_types": { 00:16:26.465 "read": true, 00:16:26.465 "write": true, 00:16:26.465 "unmap": true, 00:16:26.465 "flush": false, 00:16:26.465 "reset": true, 00:16:26.465 "nvme_admin": false, 00:16:26.465 "nvme_io": false, 00:16:26.465 "nvme_io_md": false, 00:16:26.465 "write_zeroes": true, 00:16:26.465 "zcopy": false, 00:16:26.465 "get_zone_info": false, 00:16:26.465 "zone_management": false, 00:16:26.465 "zone_append": false, 00:16:26.465 "compare": false, 00:16:26.465 "compare_and_write": false, 00:16:26.465 "abort": false, 00:16:26.465 "seek_hole": true, 00:16:26.465 "seek_data": true, 00:16:26.465 "copy": false, 00:16:26.465 "nvme_iov_md": false 00:16:26.465 }, 00:16:26.465 "driver_specific": { 00:16:26.465 "lvol": { 00:16:26.465 "lvol_store_uuid": "eb803f8c-0184-400c-abd1-8e431638c260", 00:16:26.465 "base_bdev": "nvme0n1", 00:16:26.465 "thin_provision": true, 00:16:26.465 "num_allocated_clusters": 0, 00:16:26.465 "snapshot": false, 00:16:26.465 "clone": false, 00:16:26.465 "esnap_clone": false 00:16:26.465 } 00:16:26.465 } 00:16:26.465 } 00:16:26.465 ]' 00:16:26.465 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:26.727 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:26.727 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:26.727 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:26.727 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:26.727 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:26.727 14:53:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:26.727 14:53:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1bb87da9-0c9b-4a68-aa0d-06717fef20b5 -c nvc0n1p0 --l2p_dram_limit 20 00:16:26.727 [2024-11-17 14:53:12.209937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.209983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:26.727 [2024-11-17 14:53:12.209997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:26.727 [2024-11-17 14:53:12.210007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.210052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.210065] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:26.727 [2024-11-17 14:53:12.210074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:26.727 [2024-11-17 14:53:12.210083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.210099] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:26.727 [2024-11-17 14:53:12.210798] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:26.727 [2024-11-17 14:53:12.210813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.210823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:26.727 [2024-11-17 14:53:12.210831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:16:26.727 [2024-11-17 14:53:12.210840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.210866] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6ed969e4-39ef-43a0-a959-035f873d6172 00:16:26.727 [2024-11-17 14:53:12.211905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.212256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:26.727 [2024-11-17 14:53:12.212285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:26.727 [2024-11-17 14:53:12.212297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.217433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.217462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:26.727 [2024-11-17 14:53:12.217474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.034 ms 00:16:26.727 [2024-11-17 14:53:12.217481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.217564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.217573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:26.727 [2024-11-17 14:53:12.217586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:26.727 [2024-11-17 14:53:12.217593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.217632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.217642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:26.727 [2024-11-17 14:53:12.217651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:26.727 [2024-11-17 14:53:12.217658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.217678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:26.727 [2024-11-17 14:53:12.221261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.221291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:26.727 [2024-11-17 14:53:12.221300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.590 ms 00:16:26.727 [2024-11-17 14:53:12.221310] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.221339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.221348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:26.727 [2024-11-17 14:53:12.221356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:26.727 [2024-11-17 14:53:12.221365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.221385] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:26.727 [2024-11-17 14:53:12.221523] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:26.727 [2024-11-17 14:53:12.221535] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:26.727 [2024-11-17 14:53:12.221547] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:26.727 [2024-11-17 14:53:12.221556] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:26.727 [2024-11-17 14:53:12.221567] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:26.727 [2024-11-17 14:53:12.221574] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:26.727 [2024-11-17 14:53:12.221583] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:26.727 [2024-11-17 14:53:12.221590] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:26.727 [2024-11-17 14:53:12.221597] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:26.727 [2024-11-17 14:53:12.221605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.221616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:26.727 [2024-11-17 14:53:12.221623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:16:26.727 [2024-11-17 14:53:12.221631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.221711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.727 [2024-11-17 14:53:12.221722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:26.727 [2024-11-17 14:53:12.221729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:26.727 [2024-11-17 14:53:12.221738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.727 [2024-11-17 14:53:12.221839] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:26.727 [2024-11-17 14:53:12.221851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:26.727 [2024-11-17 14:53:12.221861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.727 [2024-11-17 14:53:12.221870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.728 [2024-11-17 14:53:12.221877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:26.728 [2024-11-17 14:53:12.221885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:26.728 [2024-11-17 14:53:12.221892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:26.728 
[2024-11-17 14:53:12.221901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:26.728 [2024-11-17 14:53:12.221908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:26.728 [2024-11-17 14:53:12.221916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.728 [2024-11-17 14:53:12.221934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:26.728 [2024-11-17 14:53:12.221942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:26.728 [2024-11-17 14:53:12.221949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.728 [2024-11-17 14:53:12.221964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:26.728 [2024-11-17 14:53:12.221971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:26.728 [2024-11-17 14:53:12.221980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.728 [2024-11-17 14:53:12.221986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:26.728 [2024-11-17 14:53:12.221995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:26.728 [2024-11-17 14:53:12.222001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.728 [2024-11-17 14:53:12.222011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:26.728 [2024-11-17 14:53:12.222018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:26.728 [2024-11-17 14:53:12.222027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.728 [2024-11-17 14:53:12.222033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:26.728 [2024-11-17 14:53:12.222042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:26.728 [2024-11-17 14:53:12.222048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.728 [2024-11-17 14:53:12.222056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:26.728 [2024-11-17 14:53:12.222062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:26.728 [2024-11-17 14:53:12.222071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.728 [2024-11-17 14:53:12.222077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:26.728 [2024-11-17 14:53:12.222085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:26.728 [2024-11-17 14:53:12.222092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.728 [2024-11-17 14:53:12.222101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:26.728 [2024-11-17 14:53:12.222108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:26.728 [2024-11-17 14:53:12.222116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.728 [2024-11-17 14:53:12.222122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:26.728 [2024-11-17 14:53:12.222130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:26.728 [2024-11-17 14:53:12.222136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.728 [2024-11-17 14:53:12.222144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:26.728 [2024-11-17 14:53:12.222150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:26.728 [2024-11-17 14:53:12.222158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.728 [2024-11-17 14:53:12.222164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:26.728 [2024-11-17 14:53:12.222172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:26.728 [2024-11-17 14:53:12.222178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.728 [2024-11-17 14:53:12.222186] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:26.728 [2024-11-17 14:53:12.222193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:26.728 [2024-11-17 14:53:12.222203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.728 [2024-11-17 14:53:12.222210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.728 [2024-11-17 14:53:12.222222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:26.728 [2024-11-17 14:53:12.222228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:26.728 [2024-11-17 14:53:12.222236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:26.728 [2024-11-17 14:53:12.222243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:26.728 [2024-11-17 14:53:12.222251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:26.728 [2024-11-17 14:53:12.222257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:26.728 [2024-11-17 14:53:12.222269] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:26.728 [2024-11-17 14:53:12.222278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.728 [2024-11-17 14:53:12.222288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:26.728 [2024-11-17 14:53:12.222295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:26.728 [2024-11-17 14:53:12.222304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:26.728 [2024-11-17 14:53:12.222311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:26.728 [2024-11-17 14:53:12.222319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:26.728 [2024-11-17 14:53:12.222326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:26.728 [2024-11-17 14:53:12.222336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:26.728 [2024-11-17 14:53:12.222342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:26.728 [2024-11-17 14:53:12.222352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:26.728 [2024-11-17 14:53:12.222359] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:26.728 [2024-11-17 14:53:12.222368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:26.728 [2024-11-17 14:53:12.222375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:26.728 [2024-11-17 14:53:12.222384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:26.728 [2024-11-17 14:53:12.222391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:26.728 [2024-11-17 14:53:12.222400] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:26.728 [2024-11-17 14:53:12.222408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.728 [2024-11-17 14:53:12.222418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:26.728 [2024-11-17 14:53:12.222427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:26.728 [2024-11-17 14:53:12.222435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:26.728 [2024-11-17 14:53:12.222442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:26.728 [2024-11-17 14:53:12.222451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.728 [2024-11-17 14:53:12.222460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:26.728 [2024-11-17 14:53:12.222470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:16:26.728 [2024-11-17 14:53:12.222477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.728 [2024-11-17 14:53:12.222515] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
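Note: the ftl0 bdev whose startup is traced here was assembled from the two PCIe controllers attached earlier: the 0000:00:11.0 namespace supplies the base (data) device through a 103424 MiB thin-provisioned logical volume, and a 5171 MiB split of the 0000:00:10.0 namespace supplies the NV cache / write buffer. A compressed sketch of that RPC sequence, with each call taken from the xtrace above (in this run the lvstore uuid was eb803f8c-0184-400c-abd1-8e431638c260 and the lvol was 1bb87da9-0c9b-4a68-aa0d-06717fef20b5):

    # Sketch: rpc.py calls this test used to build ftl0 (rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py)
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0     # base device -> nvme0n1
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs                             # after deleting any stale lvstores
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore uuid>           # thin lvol, 103424 MiB
    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0      # cache device -> nvc0n1
    $rpc_py bdev_split_create nvc0n1 -s 5171 1                               # one 5171 MiB partition -> nvc0n1p0
    $rpc_py -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> -c nvc0n1p0 --l2p_dram_limit 20
    # get_bdev_size derives MiB from bdev_get_bdevs: 1310720 blocks x 4096 B = 5120 MiB (nvme0n1),
    # 26476544 x 4096 B = 103424 MiB (the lvol); the 20 MiB L2P DRAM limit is set in bdevperf.sh.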
00:16:26.728 [2024-11-17 14:53:12.222524] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:30.937 [2024-11-17 14:53:16.009533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.937 [2024-11-17 14:53:16.009617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:30.937 [2024-11-17 14:53:16.009642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3787.000 ms 00:16:30.937 [2024-11-17 14:53:16.009651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.937 [2024-11-17 14:53:16.041695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.937 [2024-11-17 14:53:16.041759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:30.937 [2024-11-17 14:53:16.041777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.791 ms 00:16:30.937 [2024-11-17 14:53:16.041786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.937 [2024-11-17 14:53:16.041952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.937 [2024-11-17 14:53:16.041965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:30.937 [2024-11-17 14:53:16.041980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:16:30.937 [2024-11-17 14:53:16.041989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.937 [2024-11-17 14:53:16.093247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.937 [2024-11-17 14:53:16.093307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:30.937 [2024-11-17 14:53:16.093325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.219 ms 00:16:30.937 [2024-11-17 14:53:16.093334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.937 [2024-11-17 14:53:16.093378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.937 [2024-11-17 14:53:16.093392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:30.937 [2024-11-17 14:53:16.093403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:30.937 [2024-11-17 14:53:16.093411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.937 [2024-11-17 14:53:16.094035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.937 [2024-11-17 14:53:16.094061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:30.937 [2024-11-17 14:53:16.094075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:16:30.937 [2024-11-17 14:53:16.094084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.937 [2024-11-17 14:53:16.094211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.937 [2024-11-17 14:53:16.094222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:30.937 [2024-11-17 14:53:16.094236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:30.937 [2024-11-17 14:53:16.094245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.937 [2024-11-17 14:53:16.110005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.937 [2024-11-17 14:53:16.110046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:30.937 [2024-11-17 
14:53:16.110061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.739 ms 00:16:30.937 [2024-11-17 14:53:16.110069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.937 [2024-11-17 14:53:16.123190] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:30.937 [2024-11-17 14:53:16.130177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.130227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:30.938 [2024-11-17 14:53:16.130239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.020 ms 00:16:30.938 [2024-11-17 14:53:16.130250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.232488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.232801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:30.938 [2024-11-17 14:53:16.232826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.210 ms 00:16:30.938 [2024-11-17 14:53:16.232837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.233162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.233197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:30.938 [2024-11-17 14:53:16.233209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:16:30.938 [2024-11-17 14:53:16.233219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.259791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.260018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:30.938 [2024-11-17 14:53:16.260041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.513 ms 00:16:30.938 [2024-11-17 14:53:16.260054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.284873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.284939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:30.938 [2024-11-17 14:53:16.284953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.779 ms 00:16:30.938 [2024-11-17 14:53:16.284963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.285592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.285622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:30.938 [2024-11-17 14:53:16.285633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:16:30.938 [2024-11-17 14:53:16.285644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.369901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.369966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:30.938 [2024-11-17 14:53:16.369980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.220 ms 00:16:30.938 [2024-11-17 14:53:16.369992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 
14:53:16.397112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.397165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:30.938 [2024-11-17 14:53:16.397178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.032 ms 00:16:30.938 [2024-11-17 14:53:16.397192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.422711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.422771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:30.938 [2024-11-17 14:53:16.422784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.472 ms 00:16:30.938 [2024-11-17 14:53:16.422794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.448887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.448957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:30.938 [2024-11-17 14:53:16.448971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.048 ms 00:16:30.938 [2024-11-17 14:53:16.448982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.449034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.449050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:30.938 [2024-11-17 14:53:16.449060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:30.938 [2024-11-17 14:53:16.449070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.449161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.938 [2024-11-17 14:53:16.449176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:30.938 [2024-11-17 14:53:16.449185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:30.938 [2024-11-17 14:53:16.449196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.938 [2024-11-17 14:53:16.450314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4239.861 ms, result 0 00:16:30.938 { 00:16:30.938 "name": "ftl0", 00:16:30.938 "uuid": "6ed969e4-39ef-43a0-a959-035f873d6172" 00:16:30.938 } 00:16:30.938 14:53:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:31.200 14:53:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:31.200 14:53:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:31.200 14:53:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:31.462 [2024-11-17 14:53:16.786511] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:31.462 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:31.462 Zero copy mechanism will not be used. 00:16:31.462 Running I/O for 4 seconds... 
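For readers following along, the pass above is driven entirely over RPC: the harness first checks that the FTL bdev registered under the expected name, then asks the already-running bdevperf application to execute the workload. A minimal sketch using only the invocations visible in this log (the surrounding error handling in ftl/bdevperf.sh is omitted):
# confirm the FTL bdev came up as ftl0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
# 4-second random-write pass at queue depth 1 with 69632-byte (68 KiB) IOs,
# deliberately above the 65536-byte zero-copy threshold reported above
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632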
00:16:33.352 670.00 IOPS, 44.49 MiB/s [2024-11-17T14:53:19.837Z] 780.50 IOPS, 51.83 MiB/s [2024-11-17T14:53:21.224Z] 786.00 IOPS, 52.20 MiB/s [2024-11-17T14:53:21.225Z] 759.25 IOPS, 50.42 MiB/s 00:16:35.682 Latency(us) 00:16:35.682 [2024-11-17T14:53:21.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.682 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:35.682 ftl0 : 4.00 758.91 50.40 0.00 0.00 1394.58 178.02 4108.60 00:16:35.682 [2024-11-17T14:53:21.225Z] =================================================================================================================== 00:16:35.682 [2024-11-17T14:53:21.225Z] Total : 758.91 50.40 0.00 0.00 1394.58 178.02 4108.60 00:16:35.682 { 00:16:35.682 "results": [ 00:16:35.682 { 00:16:35.682 "job": "ftl0", 00:16:35.682 "core_mask": "0x1", 00:16:35.682 "workload": "randwrite", 00:16:35.682 "status": "finished", 00:16:35.682 "queue_depth": 1, 00:16:35.682 "io_size": 69632, 00:16:35.682 "runtime": 4.003118, 00:16:35.682 "iops": 758.9084308781305, 00:16:35.682 "mibps": 50.396262988000856, 00:16:35.682 "io_failed": 0, 00:16:35.682 "io_timeout": 0, 00:16:35.682 "avg_latency_us": 1394.5800617815364, 00:16:35.682 "min_latency_us": 178.01846153846154, 00:16:35.682 "max_latency_us": 4108.6030769230765 00:16:35.682 } 00:16:35.682 ], 00:16:35.682 "core_count": 1 00:16:35.682 } 00:16:35.682 [2024-11-17 14:53:20.799371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:35.682 14:53:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:35.682 [2024-11-17 14:53:20.911971] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:35.682 Running I/O for 4 seconds... 
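The summary table and the JSON block above describe the same run in two forms, and the throughput column can be cross-checked by hand: MiB/s is simply IOPS multiplied by the IO size in bytes and divided by 2^20. A quick sanity check for the first pass (any POSIX awk will do):
# 758.91 IOPS * 69632 bytes per IO / 1048576 bytes per MiB -> ~50.40 MiB/s, matching the table
awk 'BEGIN { printf "%.2f MiB/s\n", 758.91 * 69632 / 1048576 }'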
00:16:37.570 5833.00 IOPS, 22.79 MiB/s [2024-11-17T14:53:24.057Z] 5205.50 IOPS, 20.33 MiB/s [2024-11-17T14:53:25.010Z] 5360.33 IOPS, 20.94 MiB/s [2024-11-17T14:53:25.010Z] 5321.75 IOPS, 20.79 MiB/s 00:16:39.467 Latency(us) 00:16:39.467 [2024-11-17T14:53:25.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.467 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.467 ftl0 : 4.04 5303.18 20.72 0.00 0.00 24024.22 245.76 56865.08 00:16:39.467 [2024-11-17T14:53:25.010Z] =================================================================================================================== 00:16:39.467 [2024-11-17T14:53:25.010Z] Total : 5303.18 20.72 0.00 0.00 24024.22 0.00 56865.08 00:16:39.467 { 00:16:39.467 "results": [ 00:16:39.467 { 00:16:39.467 "job": "ftl0", 00:16:39.467 "core_mask": "0x1", 00:16:39.467 "workload": "randwrite", 00:16:39.467 "status": "finished", 00:16:39.467 "queue_depth": 128, 00:16:39.467 "io_size": 4096, 00:16:39.467 "runtime": 4.036258, 00:16:39.467 "iops": 5303.179330954562, 00:16:39.467 "mibps": 20.715544261541257, 00:16:39.467 "io_failed": 0, 00:16:39.467 "io_timeout": 0, 00:16:39.467 "avg_latency_us": 24024.216699333367, 00:16:39.467 "min_latency_us": 245.76, 00:16:39.467 "max_latency_us": 56865.083076923074 00:16:39.467 } 00:16:39.467 ], 00:16:39.467 "core_count": 1 00:16:39.467 } 00:16:39.467 [2024-11-17 14:53:24.965004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:39.467 14:53:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:39.728 [2024-11-17 14:53:25.084262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:39.728 Running I/O for 4 seconds... 
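Besides the human-readable table, each pass also prints its results as a JSON block (seen above). Assuming such a block were saved to a file named results.json (a hypothetical name used only for illustration), the headline figures could be pulled out with jq:
# print IOPS, MiB/s and average latency from a saved bdevperf results block
jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json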
00:16:41.618 5079.00 IOPS, 19.84 MiB/s [2024-11-17T14:53:28.106Z] 4895.00 IOPS, 19.12 MiB/s [2024-11-17T14:53:29.495Z] 4799.67 IOPS, 18.75 MiB/s [2024-11-17T14:53:29.495Z] 4718.75 IOPS, 18.43 MiB/s 00:16:43.952 Latency(us) 00:16:43.952 [2024-11-17T14:53:29.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.952 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:43.952 Verification LBA range: start 0x0 length 0x1400000 00:16:43.952 ftl0 : 4.02 4731.27 18.48 0.00 0.00 26975.88 299.32 44766.13 00:16:43.952 [2024-11-17T14:53:29.495Z] =================================================================================================================== 00:16:43.952 [2024-11-17T14:53:29.495Z] Total : 4731.27 18.48 0.00 0.00 26975.88 0.00 44766.13 00:16:43.952 { 00:16:43.952 "results": [ 00:16:43.952 { 00:16:43.952 "job": "ftl0", 00:16:43.952 "core_mask": "0x1", 00:16:43.952 "workload": "verify", 00:16:43.952 "status": "finished", 00:16:43.952 "verify_range": { 00:16:43.952 "start": 0, 00:16:43.952 "length": 20971520 00:16:43.952 }, 00:16:43.952 "queue_depth": 128, 00:16:43.952 "io_size": 4096, 00:16:43.952 "runtime": 4.016465, 00:16:43.952 "iops": 4731.2748897351275, 00:16:43.952 "mibps": 18.481542538027842, 00:16:43.952 "io_failed": 0, 00:16:43.952 "io_timeout": 0, 00:16:43.952 "avg_latency_us": 26975.882377762213, 00:16:43.952 "min_latency_us": 299.32307692307694, 00:16:43.952 "max_latency_us": 44766.12923076923 00:16:43.952 } 00:16:43.952 ], 00:16:43.952 "core_count": 1 00:16:43.952 } 00:16:43.952 [2024-11-17 14:53:29.126064] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:43.952 14:53:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:43.952 [2024-11-17 14:53:29.342912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.952 [2024-11-17 14:53:29.342989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.952 [2024-11-17 14:53:29.343006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:43.952 [2024-11-17 14:53:29.343017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.952 [2024-11-17 14:53:29.343042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.952 [2024-11-17 14:53:29.346120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.952 [2024-11-17 14:53:29.346163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.952 [2024-11-17 14:53:29.346179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:16:43.952 [2024-11-17 14:53:29.346188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.952 [2024-11-17 14:53:29.349143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.952 [2024-11-17 14:53:29.349324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.952 [2024-11-17 14:53:29.349351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:16:43.952 [2024-11-17 14:53:29.349360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.215 [2024-11-17 14:53:29.587319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.215 [2024-11-17 14:53:29.587375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:16:44.216 [2024-11-17 14:53:29.587396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 237.917 ms 00:16:44.216 [2024-11-17 14:53:29.587405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.216 [2024-11-17 14:53:29.593821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.216 [2024-11-17 14:53:29.593867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:44.216 [2024-11-17 14:53:29.593884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.364 ms 00:16:44.216 [2024-11-17 14:53:29.593892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.216 [2024-11-17 14:53:29.620528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.216 [2024-11-17 14:53:29.620576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:44.216 [2024-11-17 14:53:29.620592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.539 ms 00:16:44.216 [2024-11-17 14:53:29.620600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.216 [2024-11-17 14:53:29.638330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.216 [2024-11-17 14:53:29.638380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:44.216 [2024-11-17 14:53:29.638401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.674 ms 00:16:44.216 [2024-11-17 14:53:29.638409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.216 [2024-11-17 14:53:29.638572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.216 [2024-11-17 14:53:29.638584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:44.216 [2024-11-17 14:53:29.638600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:16:44.216 [2024-11-17 14:53:29.638608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.216 [2024-11-17 14:53:29.664855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.216 [2024-11-17 14:53:29.664904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:44.216 [2024-11-17 14:53:29.664934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.224 ms 00:16:44.216 [2024-11-17 14:53:29.664942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.216 [2024-11-17 14:53:29.690873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.216 [2024-11-17 14:53:29.691078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:44.216 [2024-11-17 14:53:29.691106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.873 ms 00:16:44.216 [2024-11-17 14:53:29.691115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.216 [2024-11-17 14:53:29.716162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.216 [2024-11-17 14:53:29.716209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:44.216 [2024-11-17 14:53:29.716224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.960 ms 00:16:44.216 [2024-11-17 14:53:29.716232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.216 [2024-11-17 14:53:29.741192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.216 [2024-11-17 
14:53:29.741240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:44.216 [2024-11-17 14:53:29.741258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.864 ms 00:16:44.216 [2024-11-17 14:53:29.741265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.216 [2024-11-17 14:53:29.741314] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:44.216 [2024-11-17 14:53:29.741330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:44.216 [2024-11-17 14:53:29.741767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.741994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742004] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742255] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:44.217 [2024-11-17 14:53:29.742297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:44.217 [2024-11-17 14:53:29.742308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ed969e4-39ef-43a0-a959-035f873d6172 00:16:44.217 [2024-11-17 14:53:29.742317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:44.217 [2024-11-17 14:53:29.742327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:44.217 [2024-11-17 14:53:29.742336] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:44.217 [2024-11-17 14:53:29.742362] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:44.217 [2024-11-17 14:53:29.742370] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:44.217 [2024-11-17 14:53:29.742380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:44.217 [2024-11-17 14:53:29.742387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:44.217 [2024-11-17 14:53:29.742418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:44.217 [2024-11-17 14:53:29.742426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:44.217 [2024-11-17 14:53:29.742435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.217 [2024-11-17 14:53:29.742443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:44.217 [2024-11-17 14:53:29.742454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:16:44.217 [2024-11-17 14:53:29.742462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.756469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.479 [2024-11-17 14:53:29.756514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:44.479 [2024-11-17 14:53:29.756527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.964 ms 00:16:44.479 [2024-11-17 14:53:29.756535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.756971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.479 [2024-11-17 14:53:29.757003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:44.479 [2024-11-17 14:53:29.757021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:16:44.479 [2024-11-17 14:53:29.757034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.796182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.796384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.479 [2024-11-17 14:53:29.796413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.796422] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.796493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.796502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.479 [2024-11-17 14:53:29.796513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.796520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.796605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.796619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.479 [2024-11-17 14:53:29.796629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.796637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.796655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.796663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.479 [2024-11-17 14:53:29.796673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.796681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.880836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.880891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.479 [2024-11-17 14:53:29.880909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.880933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.950158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.950213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.479 [2024-11-17 14:53:29.950229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.950238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.950347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.950358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.479 [2024-11-17 14:53:29.950373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.950381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.950426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.950436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.479 [2024-11-17 14:53:29.950447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.950455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.950560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.950571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.479 [2024-11-17 14:53:29.950587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:44.479 [2024-11-17 14:53:29.950595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.950629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.950638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:44.479 [2024-11-17 14:53:29.950648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.950657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.950699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.950708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.479 [2024-11-17 14:53:29.950718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.950728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.950777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.479 [2024-11-17 14:53:29.950796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.479 [2024-11-17 14:53:29.950807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.479 [2024-11-17 14:53:29.950815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.479 [2024-11-17 14:53:29.951020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 607.998 ms, result 0 00:16:44.479 true 00:16:44.479 14:53:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73128 00:16:44.479 14:53:29 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 73128 ']' 00:16:44.479 14:53:29 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 73128 00:16:44.479 14:53:29 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:16:44.479 14:53:29 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.479 14:53:29 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73128 00:16:44.479 killing process with pid 73128 00:16:44.479 Received shutdown signal, test time was about 4.000000 seconds 00:16:44.479 00:16:44.479 Latency(us) 00:16:44.479 [2024-11-17T14:53:30.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.479 [2024-11-17T14:53:30.022Z] =================================================================================================================== 00:16:44.479 [2024-11-17T14:53:30.022Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.479 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.479 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.479 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73128' 00:16:44.479 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 73128 00:16:44.479 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 73128 00:16:45.422 Remove shared memory files 00:16:45.422 14:53:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:45.422 14:53:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:45.422 14:53:30 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:45.422 14:53:30 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:45.422 14:53:30 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:45.422 14:53:30 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:45.422 14:53:30 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:45.422 14:53:30 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:45.422 ************************************ 00:16:45.422 END TEST ftl_bdevperf 00:16:45.422 ************************************ 00:16:45.422 00:16:45.422 real 0m22.453s 00:16:45.422 user 0m24.746s 00:16:45.422 sys 0m0.992s 00:16:45.422 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.422 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 14:53:30 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:45.685 14:53:30 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:45.685 14:53:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.685 14:53:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 ************************************ 00:16:45.685 START TEST ftl_trim 00:16:45.685 ************************************ 00:16:45.685 14:53:30 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:45.685 * Looking for test storage... 00:16:45.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:45.685 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:45.685 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:16:45.685 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:45.685 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.685 14:53:31 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:45.685 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.685 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:45.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.685 --rc genhtml_branch_coverage=1 00:16:45.685 --rc genhtml_function_coverage=1 00:16:45.685 --rc genhtml_legend=1 00:16:45.685 --rc geninfo_all_blocks=1 00:16:45.685 --rc geninfo_unexecuted_blocks=1 00:16:45.685 00:16:45.685 ' 00:16:45.685 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:45.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.685 --rc genhtml_branch_coverage=1 00:16:45.685 --rc genhtml_function_coverage=1 00:16:45.685 --rc genhtml_legend=1 00:16:45.685 --rc geninfo_all_blocks=1 00:16:45.685 --rc geninfo_unexecuted_blocks=1 00:16:45.685 00:16:45.685 ' 00:16:45.685 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:45.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.685 --rc genhtml_branch_coverage=1 00:16:45.685 --rc genhtml_function_coverage=1 00:16:45.685 --rc genhtml_legend=1 00:16:45.685 --rc geninfo_all_blocks=1 00:16:45.685 --rc geninfo_unexecuted_blocks=1 00:16:45.686 00:16:45.686 ' 00:16:45.686 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.686 --rc genhtml_branch_coverage=1 00:16:45.686 --rc genhtml_function_coverage=1 00:16:45.686 --rc genhtml_legend=1 00:16:45.686 --rc geninfo_all_blocks=1 00:16:45.686 --rc geninfo_unexecuted_blocks=1 00:16:45.686 00:16:45.686 ' 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:45.686 14:53:31 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73485 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73485 00:16:45.686 14:53:31 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:45.686 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73485 ']' 00:16:45.686 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.686 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.686 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.686 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.686 14:53:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:45.947 [2024-11-17 14:53:31.267235] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:45.947 [2024-11-17 14:53:31.267650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73485 ] 00:16:45.947 [2024-11-17 14:53:31.433826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.209 [2024-11-17 14:53:31.559813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.209 [2024-11-17 14:53:31.560129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.209 [2024-11-17 14:53:31.560130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.782 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.782 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:46.782 14:53:32 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:46.782 14:53:32 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:46.782 14:53:32 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:46.782 14:53:32 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:46.782 14:53:32 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:46.782 14:53:32 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:47.044 14:53:32 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:47.044 14:53:32 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:47.045 14:53:32 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:47.045 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:47.045 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:47.045 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:47.045 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:47.045 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:47.307 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:47.307 { 00:16:47.307 "name": "nvme0n1", 00:16:47.307 "aliases": [ 
00:16:47.307 "14500478-171a-4f6a-9815-bd7c8c413041" 00:16:47.307 ], 00:16:47.307 "product_name": "NVMe disk", 00:16:47.307 "block_size": 4096, 00:16:47.307 "num_blocks": 1310720, 00:16:47.307 "uuid": "14500478-171a-4f6a-9815-bd7c8c413041", 00:16:47.307 "numa_id": -1, 00:16:47.307 "assigned_rate_limits": { 00:16:47.307 "rw_ios_per_sec": 0, 00:16:47.307 "rw_mbytes_per_sec": 0, 00:16:47.307 "r_mbytes_per_sec": 0, 00:16:47.307 "w_mbytes_per_sec": 0 00:16:47.307 }, 00:16:47.307 "claimed": true, 00:16:47.307 "claim_type": "read_many_write_one", 00:16:47.307 "zoned": false, 00:16:47.307 "supported_io_types": { 00:16:47.307 "read": true, 00:16:47.307 "write": true, 00:16:47.307 "unmap": true, 00:16:47.307 "flush": true, 00:16:47.307 "reset": true, 00:16:47.307 "nvme_admin": true, 00:16:47.307 "nvme_io": true, 00:16:47.307 "nvme_io_md": false, 00:16:47.307 "write_zeroes": true, 00:16:47.307 "zcopy": false, 00:16:47.307 "get_zone_info": false, 00:16:47.307 "zone_management": false, 00:16:47.307 "zone_append": false, 00:16:47.307 "compare": true, 00:16:47.307 "compare_and_write": false, 00:16:47.307 "abort": true, 00:16:47.307 "seek_hole": false, 00:16:47.307 "seek_data": false, 00:16:47.307 "copy": true, 00:16:47.307 "nvme_iov_md": false 00:16:47.307 }, 00:16:47.307 "driver_specific": { 00:16:47.307 "nvme": [ 00:16:47.307 { 00:16:47.307 "pci_address": "0000:00:11.0", 00:16:47.307 "trid": { 00:16:47.307 "trtype": "PCIe", 00:16:47.307 "traddr": "0000:00:11.0" 00:16:47.307 }, 00:16:47.307 "ctrlr_data": { 00:16:47.307 "cntlid": 0, 00:16:47.307 "vendor_id": "0x1b36", 00:16:47.307 "model_number": "QEMU NVMe Ctrl", 00:16:47.307 "serial_number": "12341", 00:16:47.307 "firmware_revision": "8.0.0", 00:16:47.307 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:47.307 "oacs": { 00:16:47.307 "security": 0, 00:16:47.307 "format": 1, 00:16:47.307 "firmware": 0, 00:16:47.307 "ns_manage": 1 00:16:47.307 }, 00:16:47.307 "multi_ctrlr": false, 00:16:47.307 "ana_reporting": false 00:16:47.307 }, 00:16:47.307 "vs": { 00:16:47.307 "nvme_version": "1.4" 00:16:47.307 }, 00:16:47.307 "ns_data": { 00:16:47.307 "id": 1, 00:16:47.307 "can_share": false 00:16:47.307 } 00:16:47.307 } 00:16:47.307 ], 00:16:47.307 "mp_policy": "active_passive" 00:16:47.307 } 00:16:47.307 } 00:16:47.307 ]' 00:16:47.307 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:47.307 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:47.307 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:47.568 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:47.568 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:47.568 14:53:32 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:16:47.568 14:53:32 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:47.569 14:53:32 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:47.569 14:53:32 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:47.569 14:53:32 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:47.569 14:53:32 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:47.569 14:53:33 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=eb803f8c-0184-400c-abd1-8e431638c260 00:16:47.569 14:53:33 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:47.569 14:53:33 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u eb803f8c-0184-400c-abd1-8e431638c260 00:16:47.830 14:53:33 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:48.092 14:53:33 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=253810e9-bd76-4310-838e-2bc97feea287 00:16:48.092 14:53:33 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 253810e9-bd76-4310-838e-2bc97feea287 00:16:48.354 14:53:33 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=29026961-4d8f-44f7-81a2-008091807dd6 00:16:48.354 14:53:33 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 29026961-4d8f-44f7-81a2-008091807dd6 00:16:48.354 14:53:33 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:48.354 14:53:33 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:48.354 14:53:33 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=29026961-4d8f-44f7-81a2-008091807dd6 00:16:48.354 14:53:33 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:48.354 14:53:33 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 29026961-4d8f-44f7-81a2-008091807dd6 00:16:48.354 14:53:33 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=29026961-4d8f-44f7-81a2-008091807dd6 00:16:48.354 14:53:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:48.354 14:53:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:48.354 14:53:33 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:48.354 14:53:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29026961-4d8f-44f7-81a2-008091807dd6 00:16:48.616 14:53:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:48.616 { 00:16:48.616 "name": "29026961-4d8f-44f7-81a2-008091807dd6", 00:16:48.616 "aliases": [ 00:16:48.616 "lvs/nvme0n1p0" 00:16:48.616 ], 00:16:48.616 "product_name": "Logical Volume", 00:16:48.616 "block_size": 4096, 00:16:48.616 "num_blocks": 26476544, 00:16:48.616 "uuid": "29026961-4d8f-44f7-81a2-008091807dd6", 00:16:48.616 "assigned_rate_limits": { 00:16:48.616 "rw_ios_per_sec": 0, 00:16:48.616 "rw_mbytes_per_sec": 0, 00:16:48.616 "r_mbytes_per_sec": 0, 00:16:48.616 "w_mbytes_per_sec": 0 00:16:48.616 }, 00:16:48.616 "claimed": false, 00:16:48.616 "zoned": false, 00:16:48.616 "supported_io_types": { 00:16:48.616 "read": true, 00:16:48.616 "write": true, 00:16:48.616 "unmap": true, 00:16:48.616 "flush": false, 00:16:48.616 "reset": true, 00:16:48.616 "nvme_admin": false, 00:16:48.616 "nvme_io": false, 00:16:48.616 "nvme_io_md": false, 00:16:48.616 "write_zeroes": true, 00:16:48.616 "zcopy": false, 00:16:48.616 "get_zone_info": false, 00:16:48.616 "zone_management": false, 00:16:48.616 "zone_append": false, 00:16:48.616 "compare": false, 00:16:48.616 "compare_and_write": false, 00:16:48.616 "abort": false, 00:16:48.616 "seek_hole": true, 00:16:48.616 "seek_data": true, 00:16:48.616 "copy": false, 00:16:48.616 "nvme_iov_md": false 00:16:48.616 }, 00:16:48.616 "driver_specific": { 00:16:48.616 "lvol": { 00:16:48.616 "lvol_store_uuid": "253810e9-bd76-4310-838e-2bc97feea287", 00:16:48.616 "base_bdev": "nvme0n1", 00:16:48.616 "thin_provision": true, 00:16:48.616 "num_allocated_clusters": 0, 00:16:48.616 "snapshot": false, 00:16:48.616 "clone": false, 00:16:48.616 "esnap_clone": false 00:16:48.616 } 00:16:48.616 } 00:16:48.616 } 00:16:48.616 ]' 00:16:48.616 14:53:33 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:48.616 14:53:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:48.616 14:53:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:48.616 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:48.616 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:48.616 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:48.617 14:53:34 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:48.617 14:53:34 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:48.617 14:53:34 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:48.878 14:53:34 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:48.878 14:53:34 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:48.878 14:53:34 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 29026961-4d8f-44f7-81a2-008091807dd6 00:16:48.878 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=29026961-4d8f-44f7-81a2-008091807dd6 00:16:48.878 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:48.878 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:48.878 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:48.878 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29026961-4d8f-44f7-81a2-008091807dd6 00:16:49.139 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:49.139 { 00:16:49.139 "name": "29026961-4d8f-44f7-81a2-008091807dd6", 00:16:49.139 "aliases": [ 00:16:49.139 "lvs/nvme0n1p0" 00:16:49.139 ], 00:16:49.139 "product_name": "Logical Volume", 00:16:49.139 "block_size": 4096, 00:16:49.139 "num_blocks": 26476544, 00:16:49.139 "uuid": "29026961-4d8f-44f7-81a2-008091807dd6", 00:16:49.139 "assigned_rate_limits": { 00:16:49.139 "rw_ios_per_sec": 0, 00:16:49.139 "rw_mbytes_per_sec": 0, 00:16:49.139 "r_mbytes_per_sec": 0, 00:16:49.139 "w_mbytes_per_sec": 0 00:16:49.139 }, 00:16:49.139 "claimed": false, 00:16:49.139 "zoned": false, 00:16:49.139 "supported_io_types": { 00:16:49.139 "read": true, 00:16:49.139 "write": true, 00:16:49.139 "unmap": true, 00:16:49.139 "flush": false, 00:16:49.139 "reset": true, 00:16:49.139 "nvme_admin": false, 00:16:49.139 "nvme_io": false, 00:16:49.139 "nvme_io_md": false, 00:16:49.139 "write_zeroes": true, 00:16:49.139 "zcopy": false, 00:16:49.139 "get_zone_info": false, 00:16:49.139 "zone_management": false, 00:16:49.139 "zone_append": false, 00:16:49.139 "compare": false, 00:16:49.139 "compare_and_write": false, 00:16:49.139 "abort": false, 00:16:49.139 "seek_hole": true, 00:16:49.139 "seek_data": true, 00:16:49.139 "copy": false, 00:16:49.139 "nvme_iov_md": false 00:16:49.139 }, 00:16:49.139 "driver_specific": { 00:16:49.139 "lvol": { 00:16:49.139 "lvol_store_uuid": "253810e9-bd76-4310-838e-2bc97feea287", 00:16:49.139 "base_bdev": "nvme0n1", 00:16:49.139 "thin_provision": true, 00:16:49.139 "num_allocated_clusters": 0, 00:16:49.139 "snapshot": false, 00:16:49.139 "clone": false, 00:16:49.139 "esnap_clone": false 00:16:49.139 } 00:16:49.139 } 00:16:49.139 } 00:16:49.139 ]' 00:16:49.139 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:49.139 14:53:34 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:16:49.139 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:49.139 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:49.139 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:49.139 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:49.139 14:53:34 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:49.139 14:53:34 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:49.400 14:53:34 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:49.400 14:53:34 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:49.400 14:53:34 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 29026961-4d8f-44f7-81a2-008091807dd6 00:16:49.401 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=29026961-4d8f-44f7-81a2-008091807dd6 00:16:49.401 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:49.401 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:49.401 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:49.401 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29026961-4d8f-44f7-81a2-008091807dd6 00:16:49.663 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:49.663 { 00:16:49.663 "name": "29026961-4d8f-44f7-81a2-008091807dd6", 00:16:49.663 "aliases": [ 00:16:49.663 "lvs/nvme0n1p0" 00:16:49.663 ], 00:16:49.663 "product_name": "Logical Volume", 00:16:49.663 "block_size": 4096, 00:16:49.663 "num_blocks": 26476544, 00:16:49.663 "uuid": "29026961-4d8f-44f7-81a2-008091807dd6", 00:16:49.663 "assigned_rate_limits": { 00:16:49.663 "rw_ios_per_sec": 0, 00:16:49.663 "rw_mbytes_per_sec": 0, 00:16:49.663 "r_mbytes_per_sec": 0, 00:16:49.663 "w_mbytes_per_sec": 0 00:16:49.663 }, 00:16:49.663 "claimed": false, 00:16:49.663 "zoned": false, 00:16:49.663 "supported_io_types": { 00:16:49.663 "read": true, 00:16:49.663 "write": true, 00:16:49.663 "unmap": true, 00:16:49.663 "flush": false, 00:16:49.663 "reset": true, 00:16:49.663 "nvme_admin": false, 00:16:49.663 "nvme_io": false, 00:16:49.663 "nvme_io_md": false, 00:16:49.663 "write_zeroes": true, 00:16:49.663 "zcopy": false, 00:16:49.663 "get_zone_info": false, 00:16:49.663 "zone_management": false, 00:16:49.663 "zone_append": false, 00:16:49.663 "compare": false, 00:16:49.663 "compare_and_write": false, 00:16:49.663 "abort": false, 00:16:49.663 "seek_hole": true, 00:16:49.663 "seek_data": true, 00:16:49.663 "copy": false, 00:16:49.663 "nvme_iov_md": false 00:16:49.663 }, 00:16:49.663 "driver_specific": { 00:16:49.663 "lvol": { 00:16:49.663 "lvol_store_uuid": "253810e9-bd76-4310-838e-2bc97feea287", 00:16:49.663 "base_bdev": "nvme0n1", 00:16:49.663 "thin_provision": true, 00:16:49.663 "num_allocated_clusters": 0, 00:16:49.663 "snapshot": false, 00:16:49.663 "clone": false, 00:16:49.663 "esnap_clone": false 00:16:49.663 } 00:16:49.663 } 00:16:49.663 } 00:16:49.663 ]' 00:16:49.663 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:49.663 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:49.663 14:53:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:49.663 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:16:49.663 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:49.663 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:49.663 14:53:35 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:49.664 14:53:35 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 29026961-4d8f-44f7-81a2-008091807dd6 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:49.664 [2024-11-17 14:53:35.178602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.178634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:49.664 [2024-11-17 14:53:35.178647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:49.664 [2024-11-17 14:53:35.178655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.180882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.180999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.664 [2024-11-17 14:53:35.181017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:16:49.664 [2024-11-17 14:53:35.181023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.181092] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:49.664 [2024-11-17 14:53:35.181624] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:49.664 [2024-11-17 14:53:35.181642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.181650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.664 [2024-11-17 14:53:35.181658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:16:49.664 [2024-11-17 14:53:35.181664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.181746] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 89855542-54d3-43a5-bee9-bade0b1117b4 00:16:49.664 [2024-11-17 14:53:35.182750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.182777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:49.664 [2024-11-17 14:53:35.182785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:49.664 [2024-11-17 14:53:35.182794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.188122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.188206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.664 [2024-11-17 14:53:35.188251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.280 ms 00:16:49.664 [2024-11-17 14:53:35.188270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.188372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.188418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.664 [2024-11-17 14:53:35.188470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:16:49.664 [2024-11-17 14:53:35.188490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.188523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.188542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:49.664 [2024-11-17 14:53:35.188557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:49.664 [2024-11-17 14:53:35.188574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.188607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:49.664 [2024-11-17 14:53:35.191615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.191702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.664 [2024-11-17 14:53:35.191754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:16:49.664 [2024-11-17 14:53:35.191773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.191823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.191893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:49.664 [2024-11-17 14:53:35.191947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:49.664 [2024-11-17 14:53:35.191974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.192008] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:49.664 [2024-11-17 14:53:35.192127] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:49.664 [2024-11-17 14:53:35.192163] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:49.664 [2024-11-17 14:53:35.192189] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:49.664 [2024-11-17 14:53:35.192252] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:49.664 [2024-11-17 14:53:35.192282] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:49.664 [2024-11-17 14:53:35.192308] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:49.664 [2024-11-17 14:53:35.192357] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:49.664 [2024-11-17 14:53:35.192377] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:49.664 [2024-11-17 14:53:35.192394] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:49.664 [2024-11-17 14:53:35.192411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 [2024-11-17 14:53:35.192426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:49.664 [2024-11-17 14:53:35.192472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:16:49.664 [2024-11-17 14:53:35.192490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.192585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.664 
[2024-11-17 14:53:35.192609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:49.664 [2024-11-17 14:53:35.192627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:49.664 [2024-11-17 14:53:35.192671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.664 [2024-11-17 14:53:35.192785] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:49.664 [2024-11-17 14:53:35.192828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:49.664 [2024-11-17 14:53:35.192848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.664 [2024-11-17 14:53:35.192864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.664 [2024-11-17 14:53:35.192880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:49.664 [2024-11-17 14:53:35.192895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:49.664 [2024-11-17 14:53:35.192991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:49.664 [2024-11-17 14:53:35.193011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:49.664 [2024-11-17 14:53:35.193027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:49.664 [2024-11-17 14:53:35.193042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.664 [2024-11-17 14:53:35.193059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:49.664 [2024-11-17 14:53:35.193074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:49.664 [2024-11-17 14:53:35.193164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.664 [2024-11-17 14:53:35.193182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:49.664 [2024-11-17 14:53:35.193199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:49.664 [2024-11-17 14:53:35.193214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.664 [2024-11-17 14:53:35.193231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:49.664 [2024-11-17 14:53:35.193275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:49.664 [2024-11-17 14:53:35.193295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.664 [2024-11-17 14:53:35.193310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:49.664 [2024-11-17 14:53:35.193327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:49.664 [2024-11-17 14:53:35.193342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.664 [2024-11-17 14:53:35.193381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:49.664 [2024-11-17 14:53:35.193399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:49.664 [2024-11-17 14:53:35.193465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.664 [2024-11-17 14:53:35.193483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:49.664 [2024-11-17 14:53:35.193499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:49.664 [2024-11-17 14:53:35.193513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.664 [2024-11-17 14:53:35.193529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:49.664 [2024-11-17 14:53:35.193544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:49.664 [2024-11-17 14:53:35.193588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.664 [2024-11-17 14:53:35.193605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:49.664 [2024-11-17 14:53:35.193623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:49.664 [2024-11-17 14:53:35.193638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.664 [2024-11-17 14:53:35.193654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:49.664 [2024-11-17 14:53:35.193668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:49.665 [2024-11-17 14:53:35.193710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.665 [2024-11-17 14:53:35.193728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:49.665 [2024-11-17 14:53:35.193743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:49.665 [2024-11-17 14:53:35.193758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.665 [2024-11-17 14:53:35.193773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:49.665 [2024-11-17 14:53:35.193788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:49.665 [2024-11-17 14:53:35.193830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.665 [2024-11-17 14:53:35.193846] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:49.665 [2024-11-17 14:53:35.193864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:49.665 [2024-11-17 14:53:35.193879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.665 [2024-11-17 14:53:35.193896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.665 [2024-11-17 14:53:35.193911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:49.665 [2024-11-17 14:53:35.193947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:49.665 [2024-11-17 14:53:35.193985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:49.665 [2024-11-17 14:53:35.194001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:49.665 [2024-11-17 14:53:35.194016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:49.665 [2024-11-17 14:53:35.194032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:49.665 [2024-11-17 14:53:35.194050] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:49.665 [2024-11-17 14:53:35.194104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.665 [2024-11-17 14:53:35.194152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:49.665 [2024-11-17 14:53:35.194196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:49.665 [2024-11-17 14:53:35.194221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:49.665 [2024-11-17 14:53:35.194245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:49.665 [2024-11-17 14:53:35.194267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:49.665 [2024-11-17 14:53:35.194345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:49.665 [2024-11-17 14:53:35.194368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:49.665 [2024-11-17 14:53:35.194392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:49.665 [2024-11-17 14:53:35.194415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:49.665 [2024-11-17 14:53:35.194469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:49.665 [2024-11-17 14:53:35.194498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:49.665 [2024-11-17 14:53:35.194524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:49.665 [2024-11-17 14:53:35.194546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:49.665 [2024-11-17 14:53:35.194601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:49.665 [2024-11-17 14:53:35.194749] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:49.665 [2024-11-17 14:53:35.194785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.665 [2024-11-17 14:53:35.194831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:49.665 [2024-11-17 14:53:35.194857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:49.665 [2024-11-17 14:53:35.194908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:49.665 [2024-11-17 14:53:35.194936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:49.665 [2024-11-17 14:53:35.194943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.665 [2024-11-17 14:53:35.194952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:49.665 [2024-11-17 14:53:35.194959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:16:49.665 [2024-11-17 14:53:35.194967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.665 [2024-11-17 14:53:35.195014] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:49.665 [2024-11-17 14:53:35.195025] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:52.206 [2024-11-17 14:53:37.475877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.476105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:52.206 [2024-11-17 14:53:37.476174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2280.852 ms 00:16:52.206 [2024-11-17 14:53:37.476203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.501071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.501228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:52.206 [2024-11-17 14:53:37.501286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.624 ms 00:16:52.206 [2024-11-17 14:53:37.501312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.501450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.501623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:52.206 [2024-11-17 14:53:37.501654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:52.206 [2024-11-17 14:53:37.501726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.541581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.541740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:52.206 [2024-11-17 14:53:37.541805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.796 ms 00:16:52.206 [2024-11-17 14:53:37.541833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.541942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.541997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:52.206 [2024-11-17 14:53:37.542019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:52.206 [2024-11-17 14:53:37.542084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.542403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.542503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:52.206 [2024-11-17 14:53:37.542559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:16:52.206 [2024-11-17 14:53:37.542584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.542739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.542799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:52.206 [2024-11-17 14:53:37.542879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:52.206 [2024-11-17 14:53:37.542906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.559277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.559376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:52.206 [2024-11-17 14:53:37.559425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.309 ms 00:16:52.206 [2024-11-17 14:53:37.559450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.570740] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:52.206 [2024-11-17 14:53:37.584558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.584661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:52.206 [2024-11-17 14:53:37.584710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.969 ms 00:16:52.206 [2024-11-17 14:53:37.584733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.649393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.649537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:52.206 [2024-11-17 14:53:37.649598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.585 ms 00:16:52.206 [2024-11-17 14:53:37.649627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.649853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.650008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:52.206 [2024-11-17 14:53:37.650041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:16:52.206 [2024-11-17 14:53:37.650061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.673393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.673497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:52.206 [2024-11-17 14:53:37.673516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.288 ms 00:16:52.206 [2024-11-17 14:53:37.673524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.696098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.696124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:52.206 [2024-11-17 14:53:37.696136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.530 ms 00:16:52.206 [2024-11-17 14:53:37.696144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.206 [2024-11-17 14:53:37.696702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.206 [2024-11-17 14:53:37.696716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:52.206 [2024-11-17 14:53:37.696727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:16:52.206 [2024-11-17 14:53:37.696734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.467 [2024-11-17 14:53:37.762598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.467 [2024-11-17 14:53:37.762627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:52.467 [2024-11-17 14:53:37.762644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.837 ms 00:16:52.467 [2024-11-17 14:53:37.762652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
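The FTL startup trace above is the tail end of the bdev stack assembled earlier in the run. As a rough guide only, the sketch below strings together the RPC calls visible in the trace: build the lvstore and a thin-provisioned base volume on nvme0n1, attach the PCIe controller used as the NV cache and split off nvc0n1p0, then create the FTL bdev. Commands, device addresses and sizes are copied from the log; SPDK_RPC is just a local shorthand for the rpc.py path and is not part of the test scripts.

#!/usr/bin/env bash
# Rough sketch of the bdev setup driven by ftl/common.sh and ftl/trim.sh above.
SPDK_RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # local shorthand, not from the scripts

# Base device: lvstore on nvme0n1, then a 103424 MiB thin-provisioned volume.
lvs=$($SPDK_RPC bdev_lvol_create_lvstore nvme0n1 lvs)
base=$($SPDK_RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")  # prints the lvol UUID

# NV cache: attach the 0000:00:10.0 controller and split off a 5171 MiB partition.
$SPDK_RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$SPDK_RPC bdev_split_create nvc0n1 -s 5171 1                      # yields nvc0n1p0

# FTL bdev on top of base + cache; startup can take a while, hence -t 240.
$SPDK_RPC -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10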
00:16:52.467 [2024-11-17 14:53:37.786533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.467 [2024-11-17 14:53:37.786560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:52.467 [2024-11-17 14:53:37.786573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.800 ms 00:16:52.467 [2024-11-17 14:53:37.786581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.468 [2024-11-17 14:53:37.809119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.468 [2024-11-17 14:53:37.809148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:52.468 [2024-11-17 14:53:37.809160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.482 ms 00:16:52.468 [2024-11-17 14:53:37.809168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.468 [2024-11-17 14:53:37.831696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.468 [2024-11-17 14:53:37.831725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:52.468 [2024-11-17 14:53:37.831738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.459 ms 00:16:52.468 [2024-11-17 14:53:37.831757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.468 [2024-11-17 14:53:37.831818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.468 [2024-11-17 14:53:37.831830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:52.468 [2024-11-17 14:53:37.831842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:52.468 [2024-11-17 14:53:37.831850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.468 [2024-11-17 14:53:37.831941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.468 [2024-11-17 14:53:37.831951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:52.468 [2024-11-17 14:53:37.831960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:52.468 [2024-11-17 14:53:37.831968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.468 [2024-11-17 14:53:37.832699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:52.468 [2024-11-17 14:53:37.835636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2653.810 ms, result 0 00:16:52.468 [2024-11-17 14:53:37.836291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:52.468 { 00:16:52.468 "name": "ftl0", 00:16:52.468 "uuid": "89855542-54d3-43a5-bee9-bade0b1117b4" 00:16:52.468 } 00:16:52.468 14:53:37 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:52.468 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:52.468 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.468 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:16:52.468 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.468 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.468 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:52.728 14:53:38 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:52.728 [ 00:16:52.728 { 00:16:52.728 "name": "ftl0", 00:16:52.728 "aliases": [ 00:16:52.728 "89855542-54d3-43a5-bee9-bade0b1117b4" 00:16:52.728 ], 00:16:52.728 "product_name": "FTL disk", 00:16:52.728 "block_size": 4096, 00:16:52.728 "num_blocks": 23592960, 00:16:52.728 "uuid": "89855542-54d3-43a5-bee9-bade0b1117b4", 00:16:52.728 "assigned_rate_limits": { 00:16:52.728 "rw_ios_per_sec": 0, 00:16:52.728 "rw_mbytes_per_sec": 0, 00:16:52.728 "r_mbytes_per_sec": 0, 00:16:52.728 "w_mbytes_per_sec": 0 00:16:52.728 }, 00:16:52.728 "claimed": false, 00:16:52.728 "zoned": false, 00:16:52.728 "supported_io_types": { 00:16:52.728 "read": true, 00:16:52.728 "write": true, 00:16:52.728 "unmap": true, 00:16:52.728 "flush": true, 00:16:52.728 "reset": false, 00:16:52.728 "nvme_admin": false, 00:16:52.728 "nvme_io": false, 00:16:52.728 "nvme_io_md": false, 00:16:52.728 "write_zeroes": true, 00:16:52.728 "zcopy": false, 00:16:52.728 "get_zone_info": false, 00:16:52.728 "zone_management": false, 00:16:52.728 "zone_append": false, 00:16:52.728 "compare": false, 00:16:52.728 "compare_and_write": false, 00:16:52.728 "abort": false, 00:16:52.728 "seek_hole": false, 00:16:52.728 "seek_data": false, 00:16:52.729 "copy": false, 00:16:52.729 "nvme_iov_md": false 00:16:52.729 }, 00:16:52.729 "driver_specific": { 00:16:52.729 "ftl": { 00:16:52.729 "base_bdev": "29026961-4d8f-44f7-81a2-008091807dd6", 00:16:52.729 "cache": "nvc0n1p0" 00:16:52.729 } 00:16:52.729 } 00:16:52.729 } 00:16:52.729 ] 00:16:52.729 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:16:52.729 14:53:38 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:52.729 14:53:38 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:52.988 14:53:38 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:52.988 14:53:38 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:53.272 14:53:38 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:53.272 { 00:16:53.272 "name": "ftl0", 00:16:53.272 "aliases": [ 00:16:53.272 "89855542-54d3-43a5-bee9-bade0b1117b4" 00:16:53.272 ], 00:16:53.272 "product_name": "FTL disk", 00:16:53.272 "block_size": 4096, 00:16:53.272 "num_blocks": 23592960, 00:16:53.272 "uuid": "89855542-54d3-43a5-bee9-bade0b1117b4", 00:16:53.272 "assigned_rate_limits": { 00:16:53.272 "rw_ios_per_sec": 0, 00:16:53.272 "rw_mbytes_per_sec": 0, 00:16:53.272 "r_mbytes_per_sec": 0, 00:16:53.272 "w_mbytes_per_sec": 0 00:16:53.272 }, 00:16:53.272 "claimed": false, 00:16:53.272 "zoned": false, 00:16:53.272 "supported_io_types": { 00:16:53.272 "read": true, 00:16:53.272 "write": true, 00:16:53.272 "unmap": true, 00:16:53.272 "flush": true, 00:16:53.272 "reset": false, 00:16:53.272 "nvme_admin": false, 00:16:53.272 "nvme_io": false, 00:16:53.272 "nvme_io_md": false, 00:16:53.272 "write_zeroes": true, 00:16:53.272 "zcopy": false, 00:16:53.272 "get_zone_info": false, 00:16:53.272 "zone_management": false, 00:16:53.272 "zone_append": false, 00:16:53.272 "compare": false, 00:16:53.272 "compare_and_write": false, 00:16:53.272 "abort": false, 00:16:53.272 "seek_hole": false, 00:16:53.272 "seek_data": false, 00:16:53.272 "copy": false, 00:16:53.272 "nvme_iov_md": false 00:16:53.272 }, 00:16:53.272 "driver_specific": { 00:16:53.272 "ftl": { 00:16:53.272 "base_bdev": "29026961-4d8f-44f7-81a2-008091807dd6", 
00:16:53.272 "cache": "nvc0n1p0" 00:16:53.272 } 00:16:53.272 } 00:16:53.272 } 00:16:53.272 ]' 00:16:53.272 14:53:38 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:53.272 14:53:38 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:53.272 14:53:38 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:53.602 [2024-11-17 14:53:38.851419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.602 [2024-11-17 14:53:38.851461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:53.602 [2024-11-17 14:53:38.851477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:53.602 [2024-11-17 14:53:38.851489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.602 [2024-11-17 14:53:38.851530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:53.602 [2024-11-17 14:53:38.854143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.602 [2024-11-17 14:53:38.854263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:53.602 [2024-11-17 14:53:38.854286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:16:53.602 [2024-11-17 14:53:38.854294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.602 [2024-11-17 14:53:38.854897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.602 [2024-11-17 14:53:38.854910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:53.602 [2024-11-17 14:53:38.854932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:16:53.602 [2024-11-17 14:53:38.854940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.602 [2024-11-17 14:53:38.858593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.602 [2024-11-17 14:53:38.858614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:53.602 [2024-11-17 14:53:38.858625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.616 ms 00:16:53.602 [2024-11-17 14:53:38.858634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.602 [2024-11-17 14:53:38.865556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.602 [2024-11-17 14:53:38.865660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:53.603 [2024-11-17 14:53:38.865678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.859 ms 00:16:53.603 [2024-11-17 14:53:38.865686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.603 [2024-11-17 14:53:38.889867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.603 [2024-11-17 14:53:38.889897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:53.603 [2024-11-17 14:53:38.889912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.106 ms 00:16:53.603 [2024-11-17 14:53:38.889932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.603 [2024-11-17 14:53:38.905747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.603 [2024-11-17 14:53:38.905779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:53.603 [2024-11-17 14:53:38.905793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.752 ms 00:16:53.603 [2024-11-17 14:53:38.905803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.603 [2024-11-17 14:53:38.906040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.603 [2024-11-17 14:53:38.906051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:53.603 [2024-11-17 14:53:38.906061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:16:53.603 [2024-11-17 14:53:38.906068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.603 [2024-11-17 14:53:38.929462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.603 [2024-11-17 14:53:38.929492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:53.603 [2024-11-17 14:53:38.929503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.353 ms 00:16:53.603 [2024-11-17 14:53:38.929510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.603 [2024-11-17 14:53:38.952937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.603 [2024-11-17 14:53:38.952967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:53.603 [2024-11-17 14:53:38.952981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.365 ms 00:16:53.603 [2024-11-17 14:53:38.952988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.603 [2024-11-17 14:53:38.975879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.603 [2024-11-17 14:53:38.975909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:53.603 [2024-11-17 14:53:38.975934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.818 ms 00:16:53.603 [2024-11-17 14:53:38.975941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.603 [2024-11-17 14:53:38.998759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.603 [2024-11-17 14:53:38.998795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:53.603 [2024-11-17 14:53:38.998807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.700 ms 00:16:53.603 [2024-11-17 14:53:38.998813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.603 [2024-11-17 14:53:38.998873] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:53.603 [2024-11-17 14:53:38.998887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998967] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.998991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 
[2024-11-17 14:53:38.999208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:53.603 [2024-11-17 14:53:38.999416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:53.603 [2024-11-17 14:53:38.999465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:53.604 [2024-11-17 14:53:38.999787] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:53.604 [2024-11-17 14:53:38.999798] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 89855542-54d3-43a5-bee9-bade0b1117b4 00:16:53.604 [2024-11-17 14:53:38.999806] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:53.604 [2024-11-17 14:53:38.999814] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:53.604 [2024-11-17 14:53:38.999822] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:53.604 [2024-11-17 14:53:38.999831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:53.604 [2024-11-17 14:53:38.999840] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:53.604 [2024-11-17 14:53:38.999849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
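After the I/O phase, trim.sh captures the bdev subsystem configuration and unloads the device, which is what produces the shutdown trace and the statistics dump shown here. A minimal sketch of that step, based only on the commands visible in the trace, follows; the output file name is a placeholder, since the redirection target never appears in the xtrace output.

#!/usr/bin/env bash
# Sketch of the config capture + unload driven by ftl/trim.sh above.
SPDK_RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # local shorthand, not from the scripts

# Wrap the saved bdev subsystem config in a JSON envelope (trim.sh@54-56).
{
    echo '{"subsystems": ['
    $SPDK_RPC save_subsystem_config -n bdev
    echo ']}'
} > ftl.json                                             # placeholder file name (assumption)

# Read back the FTL bdev size, then unload it (trim.sh@59-61).
$SPDK_RPC bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks'  # 23592960 in this run
$SPDK_RPC bdev_ftl_unload -b ftl0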
00:16:53.604 [2024-11-17 14:53:38.999856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:53.604 [2024-11-17 14:53:38.999864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:53.604 [2024-11-17 14:53:38.999870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:53.604 [2024-11-17 14:53:38.999878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.604 [2024-11-17 14:53:38.999886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:53.604 [2024-11-17 14:53:38.999896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:16:53.604 [2024-11-17 14:53:38.999903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.604 [2024-11-17 14:53:39.012389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.604 [2024-11-17 14:53:39.012417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:53.604 [2024-11-17 14:53:39.012432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.430 ms 00:16:53.604 [2024-11-17 14:53:39.012440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.604 [2024-11-17 14:53:39.012817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.604 [2024-11-17 14:53:39.012827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:53.604 [2024-11-17 14:53:39.012837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:16:53.604 [2024-11-17 14:53:39.012845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.604 [2024-11-17 14:53:39.056681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.604 [2024-11-17 14:53:39.056717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:53.604 [2024-11-17 14:53:39.056729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.604 [2024-11-17 14:53:39.056737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.604 [2024-11-17 14:53:39.056841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.604 [2024-11-17 14:53:39.056851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:53.604 [2024-11-17 14:53:39.056860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.604 [2024-11-17 14:53:39.056867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.604 [2024-11-17 14:53:39.056955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.604 [2024-11-17 14:53:39.056965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:53.604 [2024-11-17 14:53:39.056979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.604 [2024-11-17 14:53:39.056986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.604 [2024-11-17 14:53:39.057016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.604 [2024-11-17 14:53:39.057024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:53.604 [2024-11-17 14:53:39.057033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.604 [2024-11-17 14:53:39.057041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.889 [2024-11-17 14:53:39.139100] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.889 [2024-11-17 14:53:39.139139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:53.889 [2024-11-17 14:53:39.139151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.889 [2024-11-17 14:53:39.139159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.889 [2024-11-17 14:53:39.202164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.890 [2024-11-17 14:53:39.202301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:53.890 [2024-11-17 14:53:39.202320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.890 [2024-11-17 14:53:39.202328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.890 [2024-11-17 14:53:39.202417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.890 [2024-11-17 14:53:39.202426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:53.890 [2024-11-17 14:53:39.202451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.890 [2024-11-17 14:53:39.202461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.890 [2024-11-17 14:53:39.202511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.890 [2024-11-17 14:53:39.202519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:53.890 [2024-11-17 14:53:39.202528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.890 [2024-11-17 14:53:39.202535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.890 [2024-11-17 14:53:39.202646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.890 [2024-11-17 14:53:39.202655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:53.890 [2024-11-17 14:53:39.202665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.890 [2024-11-17 14:53:39.202672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.890 [2024-11-17 14:53:39.202728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.890 [2024-11-17 14:53:39.202736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:53.890 [2024-11-17 14:53:39.202745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.890 [2024-11-17 14:53:39.202752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.890 [2024-11-17 14:53:39.202803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.890 [2024-11-17 14:53:39.202812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:53.890 [2024-11-17 14:53:39.202823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.890 [2024-11-17 14:53:39.202829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.890 [2024-11-17 14:53:39.202891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.890 [2024-11-17 14:53:39.202900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:53.890 [2024-11-17 14:53:39.202909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.890 [2024-11-17 14:53:39.202917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:53.890 [2024-11-17 14:53:39.203134] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.703 ms, result 0 00:16:53.890 true 00:16:53.890 14:53:39 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73485 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73485 ']' 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73485 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73485 00:16:53.890 killing process with pid 73485 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73485' 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73485 00:16:53.890 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73485 00:17:00.454 14:53:45 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:01.022 65536+0 records in 00:17:01.022 65536+0 records out 00:17:01.022 268435456 bytes (268 MB, 256 MiB) copied, 1.08477 s, 247 MB/s 00:17:01.022 14:53:46 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.022 [2024-11-17 14:53:46.439462] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:01.022 [2024-11-17 14:53:46.439900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73665 ] 00:17:01.281 [2024-11-17 14:53:46.601327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.281 [2024-11-17 14:53:46.695556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.542 [2024-11-17 14:53:46.949021] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.542 [2024-11-17 14:53:46.949081] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.804 [2024-11-17 14:53:47.109075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.109134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.804 [2024-11-17 14:53:47.109149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:01.804 [2024-11-17 14:53:47.109158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.112256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.112307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.804 [2024-11-17 14:53:47.112320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:17:01.804 [2024-11-17 14:53:47.112328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.112449] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.804 [2024-11-17 14:53:47.113458] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:01.804 [2024-11-17 14:53:47.113520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.113532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.804 [2024-11-17 14:53:47.113543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:17:01.804 [2024-11-17 14:53:47.113551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.115360] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:01.804 [2024-11-17 14:53:47.128607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.128639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:01.804 [2024-11-17 14:53:47.128650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.249 ms 00:17:01.804 [2024-11-17 14:53:47.128658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.128740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.128752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:01.804 [2024-11-17 14:53:47.128760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:01.804 [2024-11-17 14:53:47.128768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.133644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:01.804 [2024-11-17 14:53:47.133667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.804 [2024-11-17 14:53:47.133676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.837 ms 00:17:01.804 [2024-11-17 14:53:47.133683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.133765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.133774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.804 [2024-11-17 14:53:47.133782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:01.804 [2024-11-17 14:53:47.133789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.133813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.133823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.804 [2024-11-17 14:53:47.133831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:01.804 [2024-11-17 14:53:47.133838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.133858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:01.804 [2024-11-17 14:53:47.137119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.137140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.804 [2024-11-17 14:53:47.137149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms 00:17:01.804 [2024-11-17 14:53:47.137156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.137188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.137196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:01.804 [2024-11-17 14:53:47.137204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:01.804 [2024-11-17 14:53:47.137211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.137227] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:01.804 [2024-11-17 14:53:47.137247] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:01.804 [2024-11-17 14:53:47.137280] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:01.804 [2024-11-17 14:53:47.137295] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:01.804 [2024-11-17 14:53:47.137396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:01.804 [2024-11-17 14:53:47.137406] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:01.804 [2024-11-17 14:53:47.137416] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:01.804 [2024-11-17 14:53:47.137426] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:01.804 [2024-11-17 14:53:47.137437] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:01.804 [2024-11-17 14:53:47.137445] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:01.804 [2024-11-17 14:53:47.137452] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:01.804 [2024-11-17 14:53:47.137459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:01.804 [2024-11-17 14:53:47.137467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:01.804 [2024-11-17 14:53:47.137474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.137482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:01.804 [2024-11-17 14:53:47.137489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:01.804 [2024-11-17 14:53:47.137496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.137582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.804 [2024-11-17 14:53:47.137590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:01.804 [2024-11-17 14:53:47.137600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:01.804 [2024-11-17 14:53:47.137606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.804 [2024-11-17 14:53:47.137702] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:01.804 [2024-11-17 14:53:47.137711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:01.804 [2024-11-17 14:53:47.137718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.804 [2024-11-17 14:53:47.137726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.804 [2024-11-17 14:53:47.137733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:01.804 [2024-11-17 14:53:47.137740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:01.804 [2024-11-17 14:53:47.137746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:01.804 [2024-11-17 14:53:47.137754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:01.804 [2024-11-17 14:53:47.137761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:01.804 [2024-11-17 14:53:47.137767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.805 [2024-11-17 14:53:47.137774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:01.805 [2024-11-17 14:53:47.137780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:01.805 [2024-11-17 14:53:47.137786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.805 [2024-11-17 14:53:47.137798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:01.805 [2024-11-17 14:53:47.137805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:01.805 [2024-11-17 14:53:47.137813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.805 [2024-11-17 14:53:47.137820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:01.805 [2024-11-17 14:53:47.137826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:01.805 [2024-11-17 14:53:47.137833] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.805 [2024-11-17 14:53:47.137839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:01.805 [2024-11-17 14:53:47.137845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:01.805 [2024-11-17 14:53:47.137852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.805 [2024-11-17 14:53:47.137858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:01.805 [2024-11-17 14:53:47.137865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:01.805 [2024-11-17 14:53:47.137871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.805 [2024-11-17 14:53:47.137878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:01.805 [2024-11-17 14:53:47.137884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:01.805 [2024-11-17 14:53:47.137890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.805 [2024-11-17 14:53:47.137896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:01.805 [2024-11-17 14:53:47.137903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:01.805 [2024-11-17 14:53:47.137909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.805 [2024-11-17 14:53:47.137916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:01.805 [2024-11-17 14:53:47.137941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:01.805 [2024-11-17 14:53:47.137947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.805 [2024-11-17 14:53:47.137954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:01.805 [2024-11-17 14:53:47.137961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:01.805 [2024-11-17 14:53:47.137967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.805 [2024-11-17 14:53:47.137974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:01.805 [2024-11-17 14:53:47.137980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:01.805 [2024-11-17 14:53:47.137986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.805 [2024-11-17 14:53:47.137993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:01.805 [2024-11-17 14:53:47.138000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:01.805 [2024-11-17 14:53:47.138007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.805 [2024-11-17 14:53:47.138013] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:01.805 [2024-11-17 14:53:47.138021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:01.805 [2024-11-17 14:53:47.138028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.805 [2024-11-17 14:53:47.138037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.805 [2024-11-17 14:53:47.138046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:01.805 [2024-11-17 14:53:47.138052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:01.805 [2024-11-17 14:53:47.138059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:01.805 
[2024-11-17 14:53:47.138066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:01.805 [2024-11-17 14:53:47.138072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:01.805 [2024-11-17 14:53:47.138078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:01.805 [2024-11-17 14:53:47.138086] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:01.805 [2024-11-17 14:53:47.138095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.805 [2024-11-17 14:53:47.138103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:01.805 [2024-11-17 14:53:47.138110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:01.805 [2024-11-17 14:53:47.138117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:01.805 [2024-11-17 14:53:47.138124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:01.805 [2024-11-17 14:53:47.138132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:01.805 [2024-11-17 14:53:47.138139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:01.805 [2024-11-17 14:53:47.138146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:01.805 [2024-11-17 14:53:47.138153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:01.805 [2024-11-17 14:53:47.138160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:01.805 [2024-11-17 14:53:47.138168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:01.805 [2024-11-17 14:53:47.138175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:01.805 [2024-11-17 14:53:47.138182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:01.805 [2024-11-17 14:53:47.138189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:01.805 [2024-11-17 14:53:47.138197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:01.805 [2024-11-17 14:53:47.138204] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:01.805 [2024-11-17 14:53:47.138211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.805 [2024-11-17 14:53:47.138220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:01.805 [2024-11-17 14:53:47.138227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:01.805 [2024-11-17 14:53:47.138234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:01.805 [2024-11-17 14:53:47.138241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:01.805 [2024-11-17 14:53:47.138248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.805 [2024-11-17 14:53:47.138255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:01.805 [2024-11-17 14:53:47.138264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:17:01.805 [2024-11-17 14:53:47.138271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.805 [2024-11-17 14:53:47.164136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.805 [2024-11-17 14:53:47.164163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.805 [2024-11-17 14:53:47.164172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.803 ms 00:17:01.805 [2024-11-17 14:53:47.164180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.805 [2024-11-17 14:53:47.164289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.805 [2024-11-17 14:53:47.164302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:01.805 [2024-11-17 14:53:47.164310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:01.805 [2024-11-17 14:53:47.164317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.805 [2024-11-17 14:53:47.202297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.805 [2024-11-17 14:53:47.202334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.805 [2024-11-17 14:53:47.202345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.959 ms 00:17:01.805 [2024-11-17 14:53:47.202355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.805 [2024-11-17 14:53:47.202448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.805 [2024-11-17 14:53:47.202460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.805 [2024-11-17 14:53:47.202469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:01.805 [2024-11-17 14:53:47.202476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.805 [2024-11-17 14:53:47.202844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.805 [2024-11-17 14:53:47.202868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.805 [2024-11-17 14:53:47.202877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:17:01.805 [2024-11-17 14:53:47.202889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.805 [2024-11-17 14:53:47.203031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.805 [2024-11-17 14:53:47.203042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.805 [2024-11-17 14:53:47.203050] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:17:01.805 [2024-11-17 14:53:47.203057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.805 [2024-11-17 14:53:47.216402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.805 [2024-11-17 14:53:47.216427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.805 [2024-11-17 14:53:47.216437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.325 ms 00:17:01.805 [2024-11-17 14:53:47.216445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.805 [2024-11-17 14:53:47.229129] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:01.805 [2024-11-17 14:53:47.229161] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:01.806 [2024-11-17 14:53:47.229172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.806 [2024-11-17 14:53:47.229180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:01.806 [2024-11-17 14:53:47.229188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.637 ms 00:17:01.806 [2024-11-17 14:53:47.229196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.806 [2024-11-17 14:53:47.253352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.806 [2024-11-17 14:53:47.253380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:01.806 [2024-11-17 14:53:47.253397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.090 ms 00:17:01.806 [2024-11-17 14:53:47.253404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.806 [2024-11-17 14:53:47.265116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.806 [2024-11-17 14:53:47.265142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:01.806 [2024-11-17 14:53:47.265152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.647 ms 00:17:01.806 [2024-11-17 14:53:47.265159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.806 [2024-11-17 14:53:47.276825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.806 [2024-11-17 14:53:47.276849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:01.806 [2024-11-17 14:53:47.276858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.607 ms 00:17:01.806 [2024-11-17 14:53:47.276865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.806 [2024-11-17 14:53:47.277474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.806 [2024-11-17 14:53:47.277493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:01.806 [2024-11-17 14:53:47.277503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:17:01.806 [2024-11-17 14:53:47.277510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.806 [2024-11-17 14:53:47.333010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.806 [2024-11-17 14:53:47.333052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:01.806 [2024-11-17 14:53:47.333064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.477 ms 00:17:01.806 [2024-11-17 14:53:47.333072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.806 [2024-11-17 14:53:47.343356] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:02.065 [2024-11-17 14:53:47.357651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.065 [2024-11-17 14:53:47.357685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:02.065 [2024-11-17 14:53:47.357696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.491 ms 00:17:02.065 [2024-11-17 14:53:47.357704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.065 [2024-11-17 14:53:47.357784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.065 [2024-11-17 14:53:47.357796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:02.065 [2024-11-17 14:53:47.357805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:02.065 [2024-11-17 14:53:47.357813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.065 [2024-11-17 14:53:47.357861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.065 [2024-11-17 14:53:47.357870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:02.065 [2024-11-17 14:53:47.357878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:02.065 [2024-11-17 14:53:47.357885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.065 [2024-11-17 14:53:47.357909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.065 [2024-11-17 14:53:47.357938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:02.065 [2024-11-17 14:53:47.357950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:02.065 [2024-11-17 14:53:47.357957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.066 [2024-11-17 14:53:47.357988] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:02.066 [2024-11-17 14:53:47.357998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.066 [2024-11-17 14:53:47.358006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:02.066 [2024-11-17 14:53:47.358014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:02.066 [2024-11-17 14:53:47.358021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.066 [2024-11-17 14:53:47.381815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.066 [2024-11-17 14:53:47.381851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:02.066 [2024-11-17 14:53:47.381862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.774 ms 00:17:02.066 [2024-11-17 14:53:47.381871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.066 [2024-11-17 14:53:47.381974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.066 [2024-11-17 14:53:47.381985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:02.066 [2024-11-17 14:53:47.381994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:02.066 [2024-11-17 14:53:47.382002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:02.066 [2024-11-17 14:53:47.382809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.066 [2024-11-17 14:53:47.385944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.450 ms, result 0 00:17:02.066 [2024-11-17 14:53:47.386992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.066 [2024-11-17 14:53:47.399799] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.008  [2024-11-17T14:53:49.492Z] Copying: 27/256 [MB] (27 MBps) [2024-11-17T14:53:50.436Z] Copying: 61/256 [MB] (34 MBps) [2024-11-17T14:53:51.823Z] Copying: 74/256 [MB] (13 MBps) [2024-11-17T14:53:52.766Z] Copying: 95/256 [MB] (20 MBps) [2024-11-17T14:53:53.709Z] Copying: 112/256 [MB] (17 MBps) [2024-11-17T14:53:54.652Z] Copying: 122/256 [MB] (10 MBps) [2024-11-17T14:53:55.594Z] Copying: 133/256 [MB] (10 MBps) [2024-11-17T14:53:56.538Z] Copying: 157/256 [MB] (23 MBps) [2024-11-17T14:53:57.482Z] Copying: 178/256 [MB] (21 MBps) [2024-11-17T14:53:58.425Z] Copying: 197/256 [MB] (19 MBps) [2024-11-17T14:53:59.813Z] Copying: 215/256 [MB] (17 MBps) [2024-11-17T14:54:00.757Z] Copying: 235/256 [MB] (19 MBps) [2024-11-17T14:54:01.018Z] Copying: 246/256 [MB] (11 MBps) [2024-11-17T14:54:01.018Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-17 14:54:00.921227] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:15.475 [2024-11-17 14:54:00.931569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.475 [2024-11-17 14:54:00.931610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:15.475 [2024-11-17 14:54:00.931626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:15.475 [2024-11-17 14:54:00.931635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.475 [2024-11-17 14:54:00.931659] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:15.475 [2024-11-17 14:54:00.934616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.475 [2024-11-17 14:54:00.934657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:15.475 [2024-11-17 14:54:00.934668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms 00:17:15.475 [2024-11-17 14:54:00.934677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.475 [2024-11-17 14:54:00.938069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.475 [2024-11-17 14:54:00.938110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:15.475 [2024-11-17 14:54:00.938121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:17:15.475 [2024-11-17 14:54:00.938129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.475 [2024-11-17 14:54:00.946320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.475 [2024-11-17 14:54:00.946359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:15.475 [2024-11-17 14:54:00.946377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.173 ms 00:17:15.475 [2024-11-17 14:54:00.946386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:15.475 [2024-11-17 14:54:00.953336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.475 [2024-11-17 14:54:00.953370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:15.475 [2024-11-17 14:54:00.953381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.909 ms 00:17:15.475 [2024-11-17 14:54:00.953389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.475 [2024-11-17 14:54:00.978729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.475 [2024-11-17 14:54:00.978770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:15.475 [2024-11-17 14:54:00.978783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.281 ms 00:17:15.475 [2024-11-17 14:54:00.978790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.475 [2024-11-17 14:54:00.994578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.475 [2024-11-17 14:54:00.994619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:15.475 [2024-11-17 14:54:00.994638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.741 ms 00:17:15.475 [2024-11-17 14:54:00.994650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.475 [2024-11-17 14:54:00.994796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.475 [2024-11-17 14:54:00.994808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:15.475 [2024-11-17 14:54:00.994817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:15.475 [2024-11-17 14:54:00.994825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.737 [2024-11-17 14:54:01.020457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.737 [2024-11-17 14:54:01.020495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:15.737 [2024-11-17 14:54:01.020506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.615 ms 00:17:15.737 [2024-11-17 14:54:01.020513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.738 [2024-11-17 14:54:01.045611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.738 [2024-11-17 14:54:01.045650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:15.738 [2024-11-17 14:54:01.045661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.039 ms 00:17:15.738 [2024-11-17 14:54:01.045668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.738 [2024-11-17 14:54:01.069870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.738 [2024-11-17 14:54:01.069908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:15.738 [2024-11-17 14:54:01.069935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.155 ms 00:17:15.738 [2024-11-17 14:54:01.069943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.738 [2024-11-17 14:54:01.094196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.738 [2024-11-17 14:54:01.094233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:15.738 [2024-11-17 14:54:01.094244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.176 ms 00:17:15.738 [2024-11-17 
14:54:01.094251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.738 [2024-11-17 14:54:01.094297] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:15.738 [2024-11-17 14:54:01.094320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094498] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 
14:54:01.094684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:17:15.738 [2024-11-17 14:54:01.094877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:15.738 [2024-11-17 14:54:01.094899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.094993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:15.739 [2024-11-17 14:54:01.095111] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:15.739 [2024-11-17 14:54:01.095119] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 89855542-54d3-43a5-bee9-bade0b1117b4 00:17:15.739 [2024-11-17 14:54:01.095128] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:15.739 [2024-11-17 14:54:01.095135] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:15.739 [2024-11-17 14:54:01.095142] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:15.739 [2024-11-17 14:54:01.095150] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:15.739 [2024-11-17 14:54:01.095157] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:15.739 [2024-11-17 14:54:01.095165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:15.739 [2024-11-17 14:54:01.095173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:15.739 [2024-11-17 14:54:01.095180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:15.739 [2024-11-17 14:54:01.095187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:15.739 [2024-11-17 14:54:01.095194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.739 [2024-11-17 14:54:01.095202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:15.739 [2024-11-17 14:54:01.095214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:17:15.739 [2024-11-17 14:54:01.095222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.739 [2024-11-17 14:54:01.108611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.739 [2024-11-17 14:54:01.108647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:15.739 [2024-11-17 14:54:01.108657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.358 ms 00:17:15.739 [2024-11-17 14:54:01.108665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.739 [2024-11-17 14:54:01.109097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.739 [2024-11-17 14:54:01.109116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:15.739 [2024-11-17 14:54:01.109126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:17:15.739 [2024-11-17 14:54:01.109133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.739 [2024-11-17 14:54:01.147837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.739 [2024-11-17 14:54:01.147880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:15.739 [2024-11-17 14:54:01.147892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.739 [2024-11-17 14:54:01.147900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.739 [2024-11-17 14:54:01.148031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.739 [2024-11-17 14:54:01.148045] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:15.739 [2024-11-17 14:54:01.148054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.739 [2024-11-17 14:54:01.148062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.739 [2024-11-17 14:54:01.148116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.739 [2024-11-17 14:54:01.148126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:15.739 [2024-11-17 14:54:01.148134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.739 [2024-11-17 14:54:01.148142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.739 [2024-11-17 14:54:01.148161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.739 [2024-11-17 14:54:01.148170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:15.739 [2024-11-17 14:54:01.148180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.739 [2024-11-17 14:54:01.148188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.739 [2024-11-17 14:54:01.231010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:15.739 [2024-11-17 14:54:01.231065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:15.739 [2024-11-17 14:54:01.231080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:15.739 [2024-11-17 14:54:01.231088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.000 [2024-11-17 14:54:01.299485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.000 [2024-11-17 14:54:01.299559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:16.000 [2024-11-17 14:54:01.299577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.000 [2024-11-17 14:54:01.299586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.000 [2024-11-17 14:54:01.299671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.000 [2024-11-17 14:54:01.299681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.000 [2024-11-17 14:54:01.299690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.000 [2024-11-17 14:54:01.299699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.000 [2024-11-17 14:54:01.299732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.000 [2024-11-17 14:54:01.299741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.000 [2024-11-17 14:54:01.299750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.000 [2024-11-17 14:54:01.299762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.000 [2024-11-17 14:54:01.299862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.000 [2024-11-17 14:54:01.299873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.000 [2024-11-17 14:54:01.299882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.000 [2024-11-17 14:54:01.299890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.001 [2024-11-17 14:54:01.299942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:16.001 [2024-11-17 14:54:01.299952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:16.001 [2024-11-17 14:54:01.299961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.001 [2024-11-17 14:54:01.299969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.001 [2024-11-17 14:54:01.300016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.001 [2024-11-17 14:54:01.300025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.001 [2024-11-17 14:54:01.300033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.001 [2024-11-17 14:54:01.300042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.001 [2024-11-17 14:54:01.300091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.001 [2024-11-17 14:54:01.300102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.001 [2024-11-17 14:54:01.300110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.001 [2024-11-17 14:54:01.300122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.001 [2024-11-17 14:54:01.300280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.701 ms, result 0 00:17:16.942 00:17:16.942 00:17:16.942 14:54:02 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73835 00:17:16.942 14:54:02 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:16.942 14:54:02 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73835 00:17:16.942 14:54:02 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73835 ']' 00:17:16.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.942 14:54:02 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.942 14:54:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.942 14:54:02 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.942 14:54:02 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.942 14:54:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:16.942 [2024-11-17 14:54:02.237997] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:16.942 [2024-11-17 14:54:02.238150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73835 ] 00:17:16.942 [2024-11-17 14:54:02.402401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.203 [2024-11-17 14:54:02.530934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.774 14:54:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.774 14:54:03 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:17.774 14:54:03 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:18.035 [2024-11-17 14:54:03.443756] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:18.035 [2024-11-17 14:54:03.443841] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:18.298 [2024-11-17 14:54:03.609085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.609151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:18.298 [2024-11-17 14:54:03.609169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:18.298 [2024-11-17 14:54:03.609179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.612212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.612269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.298 [2024-11-17 14:54:03.612282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:17:18.298 [2024-11-17 14:54:03.612290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.612429] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:18.298 [2024-11-17 14:54:03.613204] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:18.298 [2024-11-17 14:54:03.613238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.613247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.298 [2024-11-17 14:54:03.613260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:17:18.298 [2024-11-17 14:54:03.613267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.615488] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:18.298 [2024-11-17 14:54:03.631301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.631379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:18.298 [2024-11-17 14:54:03.631403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.823 ms 00:17:18.298 [2024-11-17 14:54:03.631414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.631561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.631578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:18.298 [2024-11-17 14:54:03.631588] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:18.298 [2024-11-17 14:54:03.631598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.640305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.640363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.298 [2024-11-17 14:54:03.640373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.649 ms 00:17:18.298 [2024-11-17 14:54:03.640383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.640507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.640521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.298 [2024-11-17 14:54:03.640530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:18.298 [2024-11-17 14:54:03.640540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.640577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.640588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:18.298 [2024-11-17 14:54:03.640596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:18.298 [2024-11-17 14:54:03.640606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.640632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:18.298 [2024-11-17 14:54:03.644869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.644912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.298 [2024-11-17 14:54:03.644938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.243 ms 00:17:18.298 [2024-11-17 14:54:03.644947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.645028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.298 [2024-11-17 14:54:03.645038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:18.298 [2024-11-17 14:54:03.645052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:18.298 [2024-11-17 14:54:03.645063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.298 [2024-11-17 14:54:03.645087] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:18.298 [2024-11-17 14:54:03.645107] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:18.298 [2024-11-17 14:54:03.645153] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:18.298 [2024-11-17 14:54:03.645169] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:18.298 [2024-11-17 14:54:03.645279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:18.298 [2024-11-17 14:54:03.645290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:18.298 [2024-11-17 14:54:03.645306] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:18.298 [2024-11-17 14:54:03.645320] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:18.298 [2024-11-17 14:54:03.645332] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:18.299 [2024-11-17 14:54:03.645341] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:18.299 [2024-11-17 14:54:03.645352] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:18.299 [2024-11-17 14:54:03.645361] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:18.299 [2024-11-17 14:54:03.645373] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:18.299 [2024-11-17 14:54:03.645381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.299 [2024-11-17 14:54:03.645391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:18.299 [2024-11-17 14:54:03.645400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:18.299 [2024-11-17 14:54:03.645409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.299 [2024-11-17 14:54:03.645499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.299 [2024-11-17 14:54:03.645520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:18.299 [2024-11-17 14:54:03.645530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:18.299 [2024-11-17 14:54:03.645539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.299 [2024-11-17 14:54:03.645647] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:18.299 [2024-11-17 14:54:03.645660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:18.299 [2024-11-17 14:54:03.645668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.299 [2024-11-17 14:54:03.645679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.299 [2024-11-17 14:54:03.645687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:18.299 [2024-11-17 14:54:03.645696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:18.299 [2024-11-17 14:54:03.645703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:18.299 [2024-11-17 14:54:03.645717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:18.299 [2024-11-17 14:54:03.645724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:18.299 [2024-11-17 14:54:03.645733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.299 [2024-11-17 14:54:03.645739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:18.299 [2024-11-17 14:54:03.645748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:18.299 [2024-11-17 14:54:03.645754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.299 [2024-11-17 14:54:03.645765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:18.299 [2024-11-17 14:54:03.645773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:18.299 [2024-11-17 14:54:03.645781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.299 
[2024-11-17 14:54:03.645788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:18.299 [2024-11-17 14:54:03.645797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:18.299 [2024-11-17 14:54:03.645804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.299 [2024-11-17 14:54:03.645821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:18.299 [2024-11-17 14:54:03.645834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:18.299 [2024-11-17 14:54:03.645843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.299 [2024-11-17 14:54:03.645849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:18.299 [2024-11-17 14:54:03.645860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:18.299 [2024-11-17 14:54:03.645867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.299 [2024-11-17 14:54:03.645875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:18.299 [2024-11-17 14:54:03.645882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:18.299 [2024-11-17 14:54:03.645892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.299 [2024-11-17 14:54:03.645898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:18.299 [2024-11-17 14:54:03.645907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:18.299 [2024-11-17 14:54:03.645914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.299 [2024-11-17 14:54:03.645938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:18.299 [2024-11-17 14:54:03.645945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:18.299 [2024-11-17 14:54:03.645954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.299 [2024-11-17 14:54:03.645961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:18.299 [2024-11-17 14:54:03.645969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:18.299 [2024-11-17 14:54:03.645976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.299 [2024-11-17 14:54:03.645986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:18.299 [2024-11-17 14:54:03.645993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:18.299 [2024-11-17 14:54:03.646004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.299 [2024-11-17 14:54:03.646011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:18.299 [2024-11-17 14:54:03.646020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:18.299 [2024-11-17 14:54:03.646027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.299 [2024-11-17 14:54:03.646035] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:18.299 [2024-11-17 14:54:03.646044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:18.299 [2024-11-17 14:54:03.646059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.299 [2024-11-17 14:54:03.646067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.299 [2024-11-17 14:54:03.646077] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:18.299 [2024-11-17 14:54:03.646084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:18.299 [2024-11-17 14:54:03.646093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:18.299 [2024-11-17 14:54:03.646100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:18.299 [2024-11-17 14:54:03.646109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:18.299 [2024-11-17 14:54:03.646116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:18.299 [2024-11-17 14:54:03.646126] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:18.299 [2024-11-17 14:54:03.646135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.299 [2024-11-17 14:54:03.646150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:18.299 [2024-11-17 14:54:03.646158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:18.299 [2024-11-17 14:54:03.646167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:18.299 [2024-11-17 14:54:03.646174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:18.299 [2024-11-17 14:54:03.646183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:18.299 [2024-11-17 14:54:03.646190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:18.299 [2024-11-17 14:54:03.646199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:18.299 [2024-11-17 14:54:03.646206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:18.299 [2024-11-17 14:54:03.646215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:18.299 [2024-11-17 14:54:03.646223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:18.299 [2024-11-17 14:54:03.646232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:18.299 [2024-11-17 14:54:03.646239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:18.299 [2024-11-17 14:54:03.646249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:18.299 [2024-11-17 14:54:03.646264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:18.299 [2024-11-17 14:54:03.646273] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:18.299 [2024-11-17 
14:54:03.646283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.299 [2024-11-17 14:54:03.646295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:18.299 [2024-11-17 14:54:03.646302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:18.299 [2024-11-17 14:54:03.646310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:18.299 [2024-11-17 14:54:03.646318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:18.299 [2024-11-17 14:54:03.646327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.299 [2024-11-17 14:54:03.646335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:18.299 [2024-11-17 14:54:03.646348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:17:18.299 [2024-11-17 14:54:03.646356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.299 [2024-11-17 14:54:03.679387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.299 [2024-11-17 14:54:03.679440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.299 [2024-11-17 14:54:03.679456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.966 ms 00:17:18.299 [2024-11-17 14:54:03.679466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.299 [2024-11-17 14:54:03.679630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.299 [2024-11-17 14:54:03.679642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:18.300 [2024-11-17 14:54:03.679653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:18.300 [2024-11-17 14:54:03.679661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.715115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.715166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.300 [2024-11-17 14:54:03.715185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.426 ms 00:17:18.300 [2024-11-17 14:54:03.715193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.715287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.715297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.300 [2024-11-17 14:54:03.715309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:18.300 [2024-11-17 14:54:03.715317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.716349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.716415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.300 [2024-11-17 14:54:03.716434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:17:18.300 [2024-11-17 14:54:03.716443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.716621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.716632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.300 [2024-11-17 14:54:03.716643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:17:18.300 [2024-11-17 14:54:03.716651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.735047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.735100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.300 [2024-11-17 14:54:03.735115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.369 ms 00:17:18.300 [2024-11-17 14:54:03.735123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.749747] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:18.300 [2024-11-17 14:54:03.749799] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:18.300 [2024-11-17 14:54:03.749816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.749824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:18.300 [2024-11-17 14:54:03.749837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.562 ms 00:17:18.300 [2024-11-17 14:54:03.749845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.776195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.776249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:18.300 [2024-11-17 14:54:03.776265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.232 ms 00:17:18.300 [2024-11-17 14:54:03.776274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.789672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.789727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:18.300 [2024-11-17 14:54:03.789745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.293 ms 00:17:18.300 [2024-11-17 14:54:03.789752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.802811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.802862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:18.300 [2024-11-17 14:54:03.802878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.962 ms 00:17:18.300 [2024-11-17 14:54:03.802885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.300 [2024-11-17 14:54:03.803603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.300 [2024-11-17 14:54:03.803640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:18.300 [2024-11-17 14:54:03.803654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:17:18.300 [2024-11-17 14:54:03.803662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.562 [2024-11-17 
14:54:03.877544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.562 [2024-11-17 14:54:03.877630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:18.562 [2024-11-17 14:54:03.877653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.847 ms 00:17:18.562 [2024-11-17 14:54:03.877662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.562 [2024-11-17 14:54:03.889363] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:18.562 [2024-11-17 14:54:03.909362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.562 [2024-11-17 14:54:03.909429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:18.562 [2024-11-17 14:54:03.909448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.577 ms 00:17:18.562 [2024-11-17 14:54:03.909459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.562 [2024-11-17 14:54:03.909557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.562 [2024-11-17 14:54:03.909572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:18.563 [2024-11-17 14:54:03.909582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:18.563 [2024-11-17 14:54:03.909592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.563 [2024-11-17 14:54:03.909649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.563 [2024-11-17 14:54:03.909661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:18.563 [2024-11-17 14:54:03.909670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:18.563 [2024-11-17 14:54:03.909680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.563 [2024-11-17 14:54:03.909709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.563 [2024-11-17 14:54:03.909720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:18.563 [2024-11-17 14:54:03.909728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:18.563 [2024-11-17 14:54:03.909741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.563 [2024-11-17 14:54:03.909778] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:18.563 [2024-11-17 14:54:03.909793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.563 [2024-11-17 14:54:03.909801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:18.563 [2024-11-17 14:54:03.909815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:18.563 [2024-11-17 14:54:03.909822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.563 [2024-11-17 14:54:03.936672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.563 [2024-11-17 14:54:03.936726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:18.563 [2024-11-17 14:54:03.936743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.816 ms 00:17:18.563 [2024-11-17 14:54:03.936751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.563 [2024-11-17 14:54:03.936874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.563 [2024-11-17 14:54:03.936885] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:18.563 [2024-11-17 14:54:03.936896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:18.563 [2024-11-17 14:54:03.936907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.563 [2024-11-17 14:54:03.938074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.563 [2024-11-17 14:54:03.941519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.626 ms, result 0 00:17:18.563 [2024-11-17 14:54:03.943863] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.563 Some configs were skipped because the RPC state that can call them passed over. 00:17:18.563 14:54:03 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:18.824 [2024-11-17 14:54:04.180801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.824 [2024-11-17 14:54:04.180883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:18.824 [2024-11-17 14:54:04.180898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.289 ms 00:17:18.824 [2024-11-17 14:54:04.180910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.824 [2024-11-17 14:54:04.180962] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.457 ms, result 0 00:17:18.824 true 00:17:18.824 14:54:04 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:19.086 [2024-11-17 14:54:04.396440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.086 [2024-11-17 14:54:04.396506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:19.086 [2024-11-17 14:54:04.396521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:17:19.086 [2024-11-17 14:54:04.396529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.086 [2024-11-17 14:54:04.396571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.814 ms, result 0 00:17:19.086 true 00:17:19.086 14:54:04 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73835 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73835 ']' 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73835 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73835 00:17:19.086 killing process with pid 73835 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73835' 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73835 00:17:19.086 14:54:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73835 00:17:20.030 [2024-11-17 14:54:05.224381] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.224465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.031 [2024-11-17 14:54:05.224481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:20.031 [2024-11-17 14:54:05.224492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.224516] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.031 [2024-11-17 14:54:05.227639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.227689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.031 [2024-11-17 14:54:05.227706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:17:20.031 [2024-11-17 14:54:05.227713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.228060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.228074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.031 [2024-11-17 14:54:05.228087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:20.031 [2024-11-17 14:54:05.228096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.233177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.233225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.031 [2024-11-17 14:54:05.233242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.056 ms 00:17:20.031 [2024-11-17 14:54:05.233250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.240661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.240707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.031 [2024-11-17 14:54:05.240722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.357 ms 00:17:20.031 [2024-11-17 14:54:05.240731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.252042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.252090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.031 [2024-11-17 14:54:05.252107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.235 ms 00:17:20.031 [2024-11-17 14:54:05.252122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.261583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.261633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.031 [2024-11-17 14:54:05.261649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.402 ms 00:17:20.031 [2024-11-17 14:54:05.261657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.261805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.261817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.031 [2024-11-17 14:54:05.261829] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:20.031 [2024-11-17 14:54:05.261837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.273314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.273364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.031 [2024-11-17 14:54:05.273377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.450 ms 00:17:20.031 [2024-11-17 14:54:05.273384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.284411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.284460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.031 [2024-11-17 14:54:05.284478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.968 ms 00:17:20.031 [2024-11-17 14:54:05.284485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.294737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.294798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.031 [2024-11-17 14:54:05.294815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.192 ms 00:17:20.031 [2024-11-17 14:54:05.294822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.305029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.031 [2024-11-17 14:54:05.305078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.031 [2024-11-17 14:54:05.305093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.117 ms 00:17:20.031 [2024-11-17 14:54:05.305100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.031 [2024-11-17 14:54:05.305150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.031 [2024-11-17 14:54:05.305166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 
14:54:05.305265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:17:20.031 [2024-11-17 14:54:05.305486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.031 [2024-11-17 14:54:05.305598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.305995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.032 [2024-11-17 14:54:05.306109] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.032 [2024-11-17 14:54:05.306124] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 89855542-54d3-43a5-bee9-bade0b1117b4 00:17:20.032 [2024-11-17 14:54:05.306139] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.032 [2024-11-17 14:54:05.306153] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.032 [2024-11-17 14:54:05.306161] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.032 [2024-11-17 14:54:05.306171] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.032 [2024-11-17 14:54:05.306178] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.032 [2024-11-17 14:54:05.306189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.032 [2024-11-17 14:54:05.306197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.032 [2024-11-17 14:54:05.306205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.032 [2024-11-17 14:54:05.306211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.032 [2024-11-17 14:54:05.306221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
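(Editorial note on the statistics dump just above: it reports total writes of 960 against zero user writes, which is why the write amplification factor is printed as "WAF: inf". As a quick illustration only — assuming the conventional definition of WAF as media writes divided by host writes, which is an assumption and not taken from this log or the SPDK FTL sources — the arithmetic behind that "inf" looks like the sketch below.)

```python
def waf(total_media_writes: int, user_writes: int) -> float:
    """Illustrative write-amplification calculation.

    Uses the conventional definition (media writes / host writes), assumed
    here for explanation; not extracted from the SPDK FTL implementation.
    """
    if user_writes == 0:
        # No host I/O has been written yet, so the ratio is undefined;
        # the FTL stats dump above renders this case as "inf".
        return float("inf")
    return total_media_writes / user_writes

# Values reported in the dump above: total writes: 960, user writes: 0
print(waf(960, 0))  # -> inf
```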
00:17:20.032 [2024-11-17 14:54:05.306229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.032 [2024-11-17 14:54:05.306240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:17:20.032 [2024-11-17 14:54:05.306247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.032 [2024-11-17 14:54:05.319964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.032 [2024-11-17 14:54:05.320006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.032 [2024-11-17 14:54:05.320022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.673 ms 00:17:20.032 [2024-11-17 14:54:05.320031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.032 [2024-11-17 14:54:05.320479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.032 [2024-11-17 14:54:05.320524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.032 [2024-11-17 14:54:05.320537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:17:20.032 [2024-11-17 14:54:05.320549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.032 [2024-11-17 14:54:05.369966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.032 [2024-11-17 14:54:05.370017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.032 [2024-11-17 14:54:05.370031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.032 [2024-11-17 14:54:05.370040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.032 [2024-11-17 14:54:05.370140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.032 [2024-11-17 14:54:05.370150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.032 [2024-11-17 14:54:05.370163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.032 [2024-11-17 14:54:05.370174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.032 [2024-11-17 14:54:05.370227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.032 [2024-11-17 14:54:05.370239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.032 [2024-11-17 14:54:05.370252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.032 [2024-11-17 14:54:05.370259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.032 [2024-11-17 14:54:05.370279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.032 [2024-11-17 14:54:05.370286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.032 [2024-11-17 14:54:05.370296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.032 [2024-11-17 14:54:05.370303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.032 [2024-11-17 14:54:05.454680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.032 [2024-11-17 14:54:05.454742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.032 [2024-11-17 14:54:05.454759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.032 [2024-11-17 14:54:05.454768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.032 [2024-11-17 
14:54:05.524408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.032 [2024-11-17 14:54:05.524465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.032 [2024-11-17 14:54:05.524480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.033 [2024-11-17 14:54:05.524492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.033 [2024-11-17 14:54:05.524567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.033 [2024-11-17 14:54:05.524578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.033 [2024-11-17 14:54:05.524593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.033 [2024-11-17 14:54:05.524601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.033 [2024-11-17 14:54:05.524637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.033 [2024-11-17 14:54:05.524647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.033 [2024-11-17 14:54:05.524661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.033 [2024-11-17 14:54:05.524669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.033 [2024-11-17 14:54:05.524774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.033 [2024-11-17 14:54:05.524786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.033 [2024-11-17 14:54:05.524797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.033 [2024-11-17 14:54:05.524804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.033 [2024-11-17 14:54:05.524842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.033 [2024-11-17 14:54:05.524852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.033 [2024-11-17 14:54:05.524863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.033 [2024-11-17 14:54:05.524871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.033 [2024-11-17 14:54:05.524945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.033 [2024-11-17 14:54:05.524958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.033 [2024-11-17 14:54:05.524972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.033 [2024-11-17 14:54:05.524981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.033 [2024-11-17 14:54:05.525033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.033 [2024-11-17 14:54:05.525044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.033 [2024-11-17 14:54:05.525055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.033 [2024-11-17 14:54:05.525064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.033 [2024-11-17 14:54:05.525223] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 300.811 ms, result 0 00:17:20.604 14:54:06 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:20.604 14:54:06 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:20.865 [2024-11-17 14:54:06.160401] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:20.865 [2024-11-17 14:54:06.160531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73894 ] 00:17:20.865 [2024-11-17 14:54:06.317871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.865 [2024-11-17 14:54:06.398911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.126 [2024-11-17 14:54:06.605271] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.126 [2024-11-17 14:54:06.605322] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.389 [2024-11-17 14:54:06.757536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.757574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:21.389 [2024-11-17 14:54:06.757585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:21.389 [2024-11-17 14:54:06.757591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.759672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.759702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.389 [2024-11-17 14:54:06.759710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:17:21.389 [2024-11-17 14:54:06.759715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.759771] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:21.389 [2024-11-17 14:54:06.760317] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:21.389 [2024-11-17 14:54:06.760335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.760341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.389 [2024-11-17 14:54:06.760348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:17:21.389 [2024-11-17 14:54:06.760354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.761333] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:21.389 [2024-11-17 14:54:06.771023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.771054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:21.389 [2024-11-17 14:54:06.771063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.691 ms 00:17:21.389 [2024-11-17 14:54:06.771069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.771138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.771147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:21.389 [2024-11-17 14:54:06.771153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:17:21.389 [2024-11-17 14:54:06.771158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.775719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.775746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.389 [2024-11-17 14:54:06.775753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.532 ms 00:17:21.389 [2024-11-17 14:54:06.775759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.775830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.775837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.389 [2024-11-17 14:54:06.775843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:21.389 [2024-11-17 14:54:06.775849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.775864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.775872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:21.389 [2024-11-17 14:54:06.775879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:21.389 [2024-11-17 14:54:06.775884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.775901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:21.389 [2024-11-17 14:54:06.778634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.778658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.389 [2024-11-17 14:54:06.778665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.736 ms 00:17:21.389 [2024-11-17 14:54:06.778670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.778697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.389 [2024-11-17 14:54:06.778703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:21.389 [2024-11-17 14:54:06.778710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:21.389 [2024-11-17 14:54:06.778715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.389 [2024-11-17 14:54:06.778728] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:21.389 [2024-11-17 14:54:06.778744] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:21.389 [2024-11-17 14:54:06.778770] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:21.389 [2024-11-17 14:54:06.778782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:21.389 [2024-11-17 14:54:06.778859] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:21.389 [2024-11-17 14:54:06.778867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:21.389 [2024-11-17 14:54:06.778875] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:21.389 [2024-11-17 14:54:06.778882] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:21.390 [2024-11-17 14:54:06.778890] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:21.390 [2024-11-17 14:54:06.778896] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:21.390 [2024-11-17 14:54:06.778902] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:21.390 [2024-11-17 14:54:06.778907] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:21.390 [2024-11-17 14:54:06.778912] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:21.390 [2024-11-17 14:54:06.778928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.390 [2024-11-17 14:54:06.778934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:21.390 [2024-11-17 14:54:06.778941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:17:21.390 [2024-11-17 14:54:06.778946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.390 [2024-11-17 14:54:06.779013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.390 [2024-11-17 14:54:06.779025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:21.390 [2024-11-17 14:54:06.779034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:21.390 [2024-11-17 14:54:06.779039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.390 [2024-11-17 14:54:06.779113] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:21.390 [2024-11-17 14:54:06.779121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:21.390 [2024-11-17 14:54:06.779127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.390 [2024-11-17 14:54:06.779133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:21.390 [2024-11-17 14:54:06.779145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:21.390 [2024-11-17 14:54:06.779154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:21.390 [2024-11-17 14:54:06.779159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.390 [2024-11-17 14:54:06.779170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:21.390 [2024-11-17 14:54:06.779175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:21.390 [2024-11-17 14:54:06.779180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.390 [2024-11-17 14:54:06.779190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:21.390 [2024-11-17 14:54:06.779196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:21.390 [2024-11-17 14:54:06.779201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779206] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:21.390 [2024-11-17 14:54:06.779210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:21.390 [2024-11-17 14:54:06.779215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:21.390 [2024-11-17 14:54:06.779225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.390 [2024-11-17 14:54:06.779235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:21.390 [2024-11-17 14:54:06.779240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.390 [2024-11-17 14:54:06.779250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:21.390 [2024-11-17 14:54:06.779255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.390 [2024-11-17 14:54:06.779265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:21.390 [2024-11-17 14:54:06.779270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.390 [2024-11-17 14:54:06.779279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:21.390 [2024-11-17 14:54:06.779284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.390 [2024-11-17 14:54:06.779295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:21.390 [2024-11-17 14:54:06.779300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:21.390 [2024-11-17 14:54:06.779305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.390 [2024-11-17 14:54:06.779310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:21.390 [2024-11-17 14:54:06.779315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:21.390 [2024-11-17 14:54:06.779320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:21.390 [2024-11-17 14:54:06.779329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:21.390 [2024-11-17 14:54:06.779335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779340] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:21.390 [2024-11-17 14:54:06.779345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:21.390 [2024-11-17 14:54:06.779351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.390 [2024-11-17 14:54:06.779358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.390 [2024-11-17 14:54:06.779363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:21.390 
[2024-11-17 14:54:06.779368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:21.390 [2024-11-17 14:54:06.779373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:21.390 [2024-11-17 14:54:06.779378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:21.390 [2024-11-17 14:54:06.779382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:21.390 [2024-11-17 14:54:06.779387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:21.390 [2024-11-17 14:54:06.779394] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:21.390 [2024-11-17 14:54:06.779401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.390 [2024-11-17 14:54:06.779407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:21.390 [2024-11-17 14:54:06.779413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:21.390 [2024-11-17 14:54:06.779418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:21.390 [2024-11-17 14:54:06.779424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:21.390 [2024-11-17 14:54:06.779429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:21.390 [2024-11-17 14:54:06.779434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:21.390 [2024-11-17 14:54:06.779440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:21.390 [2024-11-17 14:54:06.779445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:21.390 [2024-11-17 14:54:06.779451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:21.390 [2024-11-17 14:54:06.779456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:21.390 [2024-11-17 14:54:06.779461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:21.390 [2024-11-17 14:54:06.779466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:21.390 [2024-11-17 14:54:06.779471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:21.390 [2024-11-17 14:54:06.779477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:21.390 [2024-11-17 14:54:06.779482] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:21.390 [2024-11-17 14:54:06.779488] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.390 [2024-11-17 14:54:06.779493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:21.390 [2024-11-17 14:54:06.779499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:21.390 [2024-11-17 14:54:06.779504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:21.390 [2024-11-17 14:54:06.779518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:21.390 [2024-11-17 14:54:06.779524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.390 [2024-11-17 14:54:06.779529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:21.390 [2024-11-17 14:54:06.779537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:17:21.390 [2024-11-17 14:54:06.779542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.390 [2024-11-17 14:54:06.800489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.390 [2024-11-17 14:54:06.800518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.390 [2024-11-17 14:54:06.800526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.909 ms 00:17:21.390 [2024-11-17 14:54:06.800532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.390 [2024-11-17 14:54:06.800623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.390 [2024-11-17 14:54:06.800633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:21.391 [2024-11-17 14:54:06.800639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:21.391 [2024-11-17 14:54:06.800645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 14:54:06.837032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.837066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.391 [2024-11-17 14:54:06.837075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.371 ms 00:17:21.391 [2024-11-17 14:54:06.837084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 14:54:06.837143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.837152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.391 [2024-11-17 14:54:06.837159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:21.391 [2024-11-17 14:54:06.837165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 14:54:06.837462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.837485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.391 [2024-11-17 14:54:06.837492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:21.391 [2024-11-17 14:54:06.837498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 
14:54:06.837608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.837615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.391 [2024-11-17 14:54:06.837623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:21.391 [2024-11-17 14:54:06.837628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 14:54:06.848485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.848515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.391 [2024-11-17 14:54:06.848523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.842 ms 00:17:21.391 [2024-11-17 14:54:06.848528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 14:54:06.858362] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:21.391 [2024-11-17 14:54:06.858392] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:21.391 [2024-11-17 14:54:06.858401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.858407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:21.391 [2024-11-17 14:54:06.858414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.801 ms 00:17:21.391 [2024-11-17 14:54:06.858420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 14:54:06.876976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.877013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:21.391 [2024-11-17 14:54:06.877022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.508 ms 00:17:21.391 [2024-11-17 14:54:06.877029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 14:54:06.885901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.885935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:21.391 [2024-11-17 14:54:06.885943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.817 ms 00:17:21.391 [2024-11-17 14:54:06.885949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 14:54:06.894616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.894642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:21.391 [2024-11-17 14:54:06.894650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.625 ms 00:17:21.391 [2024-11-17 14:54:06.894655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.391 [2024-11-17 14:54:06.895132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.391 [2024-11-17 14:54:06.895152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:21.391 [2024-11-17 14:54:06.895160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:17:21.391 [2024-11-17 14:54:06.895166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.653 [2024-11-17 14:54:06.939006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:21.653 [2024-11-17 14:54:06.939044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:21.653 [2024-11-17 14:54:06.939055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.822 ms 00:17:21.653 [2024-11-17 14:54:06.939061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.653 [2024-11-17 14:54:06.946790] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:21.653 [2024-11-17 14:54:06.958907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.653 [2024-11-17 14:54:06.958952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:21.653 [2024-11-17 14:54:06.958962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.779 ms 00:17:21.653 [2024-11-17 14:54:06.958968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.653 [2024-11-17 14:54:06.959045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.653 [2024-11-17 14:54:06.959053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:21.653 [2024-11-17 14:54:06.959060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:21.653 [2024-11-17 14:54:06.959065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.653 [2024-11-17 14:54:06.959101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.653 [2024-11-17 14:54:06.959107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:21.653 [2024-11-17 14:54:06.959114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:21.653 [2024-11-17 14:54:06.959119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.653 [2024-11-17 14:54:06.959140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.653 [2024-11-17 14:54:06.959148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:21.653 [2024-11-17 14:54:06.959154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:21.653 [2024-11-17 14:54:06.959160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.653 [2024-11-17 14:54:06.959182] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:21.653 [2024-11-17 14:54:06.959189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.653 [2024-11-17 14:54:06.959195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:21.653 [2024-11-17 14:54:06.959201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:21.653 [2024-11-17 14:54:06.959207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.653 [2024-11-17 14:54:06.977463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.653 [2024-11-17 14:54:06.977492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:21.653 [2024-11-17 14:54:06.977500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.241 ms 00:17:21.653 [2024-11-17 14:54:06.977507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.653 [2024-11-17 14:54:06.977578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.653 [2024-11-17 14:54:06.977586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:21.653 [2024-11-17 14:54:06.977593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:21.653 [2024-11-17 14:54:06.977599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.653 [2024-11-17 14:54:06.978218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:21.653 [2024-11-17 14:54:06.980619] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 220.462 ms, result 0 00:17:21.653 [2024-11-17 14:54:06.981257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.653 [2024-11-17 14:54:06.996218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.597  [2024-11-17T14:54:09.085Z] Copying: 29/256 [MB] (29 MBps) [2024-11-17T14:54:10.036Z] Copying: 40/256 [MB] (10 MBps) [2024-11-17T14:54:11.422Z] Copying: 66/256 [MB] (26 MBps) [2024-11-17T14:54:12.012Z] Copying: 84/256 [MB] (17 MBps) [2024-11-17T14:54:13.426Z] Copying: 96/256 [MB] (11 MBps) [2024-11-17T14:54:14.369Z] Copying: 113/256 [MB] (16 MBps) [2024-11-17T14:54:15.313Z] Copying: 129/256 [MB] (16 MBps) [2024-11-17T14:54:16.257Z] Copying: 148/256 [MB] (19 MBps) [2024-11-17T14:54:17.200Z] Copying: 168/256 [MB] (20 MBps) [2024-11-17T14:54:18.146Z] Copying: 187/256 [MB] (19 MBps) [2024-11-17T14:54:19.090Z] Copying: 203/256 [MB] (16 MBps) [2024-11-17T14:54:20.033Z] Copying: 218/256 [MB] (14 MBps) [2024-11-17T14:54:20.977Z] Copying: 241/256 [MB] (23 MBps) [2024-11-17T14:54:20.977Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-17 14:54:20.775271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.434 [2024-11-17 14:54:20.785826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.434 [2024-11-17 14:54:20.785882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.434 [2024-11-17 14:54:20.785902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.434 [2024-11-17 14:54:20.785938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.434 [2024-11-17 14:54:20.785974] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:35.434 [2024-11-17 14:54:20.789054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.434 [2024-11-17 14:54:20.789109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.434 [2024-11-17 14:54:20.789126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:17:35.434 [2024-11-17 14:54:20.789138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.434 [2024-11-17 14:54:20.789474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.434 [2024-11-17 14:54:20.789498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.434 [2024-11-17 14:54:20.789513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:35.434 [2024-11-17 14:54:20.789526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.434 [2024-11-17 14:54:20.793279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.434 [2024-11-17 14:54:20.793314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:17:35.434 [2024-11-17 14:54:20.793327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.729 ms 00:17:35.434 [2024-11-17 14:54:20.793339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.434 [2024-11-17 14:54:20.800347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.434 [2024-11-17 14:54:20.800392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:35.434 [2024-11-17 14:54:20.800408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.978 ms 00:17:35.434 [2024-11-17 14:54:20.800420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.434 [2024-11-17 14:54:20.826063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.434 [2024-11-17 14:54:20.826111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.434 [2024-11-17 14:54:20.826129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.561 ms 00:17:35.434 [2024-11-17 14:54:20.826141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.434 [2024-11-17 14:54:20.842112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.434 [2024-11-17 14:54:20.842173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.434 [2024-11-17 14:54:20.842190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.911 ms 00:17:35.434 [2024-11-17 14:54:20.842207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.434 [2024-11-17 14:54:20.842404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.434 [2024-11-17 14:54:20.842422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.434 [2024-11-17 14:54:20.842436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:35.434 [2024-11-17 14:54:20.842449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.434 [2024-11-17 14:54:20.868011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.434 [2024-11-17 14:54:20.868055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:35.434 [2024-11-17 14:54:20.868071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.527 ms 00:17:35.434 [2024-11-17 14:54:20.868082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.434 [2024-11-17 14:54:20.893329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.435 [2024-11-17 14:54:20.893384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:35.435 [2024-11-17 14:54:20.893400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.166 ms 00:17:35.435 [2024-11-17 14:54:20.893411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.435 [2024-11-17 14:54:20.918400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.435 [2024-11-17 14:54:20.918447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.435 [2024-11-17 14:54:20.918463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.915 ms 00:17:35.435 [2024-11-17 14:54:20.918473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.435 [2024-11-17 14:54:20.943223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.435 [2024-11-17 14:54:20.943275] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.435 [2024-11-17 14:54:20.943291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.638 ms 00:17:35.435 [2024-11-17 14:54:20.943301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.435 [2024-11-17 14:54:20.943359] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.435 [2024-11-17 14:54:20.943381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:35.435 [2024-11-17 14:54:20.943682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.943987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.435 [2024-11-17 14:54:20.944442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944671] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:35.436 [2024-11-17 14:54:20.944738] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:35.436 [2024-11-17 14:54:20.944751] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 89855542-54d3-43a5-bee9-bade0b1117b4 00:17:35.436 [2024-11-17 14:54:20.944766] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:35.436 [2024-11-17 14:54:20.944778] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:35.436 [2024-11-17 14:54:20.944791] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:35.436 [2024-11-17 14:54:20.944807] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:35.436 [2024-11-17 14:54:20.944820] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:35.436 [2024-11-17 14:54:20.944833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:35.436 [2024-11-17 14:54:20.944845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:35.436 [2024-11-17 14:54:20.944857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:35.436 [2024-11-17 14:54:20.944868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:35.436 [2024-11-17 14:54:20.944880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.436 [2024-11-17 14:54:20.944896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:35.436 [2024-11-17 14:54:20.944911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:17:35.436 [2024-11-17 14:54:20.944943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.436 [2024-11-17 14:54:20.960503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.436 [2024-11-17 14:54:20.960548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:35.436 [2024-11-17 14:54:20.960565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.507 ms 00:17:35.436 [2024-11-17 14:54:20.960576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.436 [2024-11-17 14:54:20.961064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.436 [2024-11-17 14:54:20.961098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:35.436 [2024-11-17 14:54:20.961112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:17:35.436 [2024-11-17 14:54:20.961124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.000226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.000276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.697 [2024-11-17 14:54:21.000291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.000303] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.000446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.000463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.697 [2024-11-17 14:54:21.000477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.000490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.000560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.000574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.697 [2024-11-17 14:54:21.000588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.000600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.000627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.000646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.697 [2024-11-17 14:54:21.000659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.000671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.086139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.086193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.697 [2024-11-17 14:54:21.086210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.086223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.155895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.155979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.697 [2024-11-17 14:54:21.155997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.156010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.156111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.156125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.697 [2024-11-17 14:54:21.156139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.156152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.156200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.156214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.697 [2024-11-17 14:54:21.156235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.156248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.156396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.156411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.697 [2024-11-17 14:54:21.156427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:35.697 [2024-11-17 14:54:21.156439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.156489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.156505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:35.697 [2024-11-17 14:54:21.156519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.156537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.156599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.156614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.697 [2024-11-17 14:54:21.156628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.156640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.156710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.697 [2024-11-17 14:54:21.156726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.697 [2024-11-17 14:54:21.156745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.697 [2024-11-17 14:54:21.156757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-11-17 14:54:21.156994] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.113 ms, result 0 00:17:36.642 00:17:36.642 00:17:36.642 14:54:21 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:36.642 14:54:21 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:37.215 14:54:22 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:37.215 [2024-11-17 14:54:22.546192] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:37.215 [2024-11-17 14:54:22.546344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74065 ] 00:17:37.215 [2024-11-17 14:54:22.707961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.476 [2024-11-17 14:54:22.826636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.736 [2024-11-17 14:54:23.116052] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:37.736 [2024-11-17 14:54:23.116139] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:37.998 [2024-11-17 14:54:23.278366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.998 [2024-11-17 14:54:23.278433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:37.998 [2024-11-17 14:54:23.278454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:37.998 [2024-11-17 14:54:23.278467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.998 [2024-11-17 14:54:23.281534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.998 [2024-11-17 14:54:23.281588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.998 [2024-11-17 14:54:23.281604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:17:37.998 [2024-11-17 14:54:23.281616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.998 [2024-11-17 14:54:23.281767] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:37.998 [2024-11-17 14:54:23.282605] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:37.998 [2024-11-17 14:54:23.282645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.998 [2024-11-17 14:54:23.282658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:37.998 [2024-11-17 14:54:23.282672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:17:37.998 [2024-11-17 14:54:23.282684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.998 [2024-11-17 14:54:23.284527] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:37.998 [2024-11-17 14:54:23.298801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.998 [2024-11-17 14:54:23.298858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:37.998 [2024-11-17 14:54:23.298878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.276 ms 00:17:37.998 [2024-11-17 14:54:23.298891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.998 [2024-11-17 14:54:23.299045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.998 [2024-11-17 14:54:23.299065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:37.998 [2024-11-17 14:54:23.299081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:37.999 [2024-11-17 14:54:23.299095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.999 [2024-11-17 14:54:23.307163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:37.999 [2024-11-17 14:54:23.307210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.999 [2024-11-17 14:54:23.307226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.002 ms 00:17:37.999 [2024-11-17 14:54:23.307237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.999 [2024-11-17 14:54:23.307373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.999 [2024-11-17 14:54:23.307389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.999 [2024-11-17 14:54:23.307403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:37.999 [2024-11-17 14:54:23.307415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.999 [2024-11-17 14:54:23.307454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.999 [2024-11-17 14:54:23.307472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:37.999 [2024-11-17 14:54:23.307486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:37.999 [2024-11-17 14:54:23.307499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.999 [2024-11-17 14:54:23.307565] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:37.999 [2024-11-17 14:54:23.311653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.999 [2024-11-17 14:54:23.311693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.999 [2024-11-17 14:54:23.311708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.096 ms 00:17:37.999 [2024-11-17 14:54:23.311719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.999 [2024-11-17 14:54:23.311823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.999 [2024-11-17 14:54:23.311841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:37.999 [2024-11-17 14:54:23.311857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:37.999 [2024-11-17 14:54:23.311870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.999 [2024-11-17 14:54:23.311902] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:37.999 [2024-11-17 14:54:23.311955] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:37.999 [2024-11-17 14:54:23.312009] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:37.999 [2024-11-17 14:54:23.312036] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:37.999 [2024-11-17 14:54:23.312184] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:37.999 [2024-11-17 14:54:23.312203] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:37.999 [2024-11-17 14:54:23.312219] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:37.999 [2024-11-17 14:54:23.312237] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:37.999 [2024-11-17 14:54:23.312255] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:37.999 [2024-11-17 14:54:23.312269] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:37.999 [2024-11-17 14:54:23.312282] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:37.999 [2024-11-17 14:54:23.312295] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:37.999 [2024-11-17 14:54:23.312308] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:37.999 [2024-11-17 14:54:23.312321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.999 [2024-11-17 14:54:23.312334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:37.999 [2024-11-17 14:54:23.312347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:17:37.999 [2024-11-17 14:54:23.312359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.999 [2024-11-17 14:54:23.312486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.999 [2024-11-17 14:54:23.312501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:37.999 [2024-11-17 14:54:23.312518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:37.999 [2024-11-17 14:54:23.312530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.999 [2024-11-17 14:54:23.312671] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:37.999 [2024-11-17 14:54:23.312687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:37.999 [2024-11-17 14:54:23.312702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.999 [2024-11-17 14:54:23.312716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.999 [2024-11-17 14:54:23.312729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:37.999 [2024-11-17 14:54:23.312742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:37.999 [2024-11-17 14:54:23.312755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:37.999 [2024-11-17 14:54:23.312768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:37.999 [2024-11-17 14:54:23.312781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:37.999 [2024-11-17 14:54:23.312792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.999 [2024-11-17 14:54:23.312805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:37.999 [2024-11-17 14:54:23.312816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:37.999 [2024-11-17 14:54:23.312828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.999 [2024-11-17 14:54:23.312849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:37.999 [2024-11-17 14:54:23.312861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:37.999 [2024-11-17 14:54:23.312872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.999 [2024-11-17 14:54:23.312883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:37.999 [2024-11-17 14:54:23.312894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:37.999 [2024-11-17 14:54:23.312905] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.999 [2024-11-17 14:54:23.312933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:37.999 [2024-11-17 14:54:23.312946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:37.999 [2024-11-17 14:54:23.312957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.999 [2024-11-17 14:54:23.312968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:37.999 [2024-11-17 14:54:23.312979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:37.999 [2024-11-17 14:54:23.312991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.999 [2024-11-17 14:54:23.313002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:37.999 [2024-11-17 14:54:23.313014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:37.999 [2024-11-17 14:54:23.313026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.999 [2024-11-17 14:54:23.313037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:37.999 [2024-11-17 14:54:23.313049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:37.999 [2024-11-17 14:54:23.313061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.999 [2024-11-17 14:54:23.313071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:37.999 [2024-11-17 14:54:23.313082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:37.999 [2024-11-17 14:54:23.313094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.999 [2024-11-17 14:54:23.313105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:37.999 [2024-11-17 14:54:23.313116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:37.999 [2024-11-17 14:54:23.313128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.999 [2024-11-17 14:54:23.313142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:37.999 [2024-11-17 14:54:23.313155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:37.999 [2024-11-17 14:54:23.313166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.999 [2024-11-17 14:54:23.313177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:37.999 [2024-11-17 14:54:23.313190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:37.999 [2024-11-17 14:54:23.313201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.999 [2024-11-17 14:54:23.313216] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:37.999 [2024-11-17 14:54:23.313229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:37.999 [2024-11-17 14:54:23.313242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.999 [2024-11-17 14:54:23.313257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.999 [2024-11-17 14:54:23.313270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:37.999 [2024-11-17 14:54:23.313282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:37.999 [2024-11-17 14:54:23.313294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:37.999 
[2024-11-17 14:54:23.313306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:37.999 [2024-11-17 14:54:23.313317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:37.999 [2024-11-17 14:54:23.313329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:37.999 [2024-11-17 14:54:23.313343] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:37.999 [2024-11-17 14:54:23.313360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.999 [2024-11-17 14:54:23.313374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:37.999 [2024-11-17 14:54:23.313387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:37.999 [2024-11-17 14:54:23.313399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:37.999 [2024-11-17 14:54:23.313412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:38.000 [2024-11-17 14:54:23.313425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:38.000 [2024-11-17 14:54:23.313437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:38.000 [2024-11-17 14:54:23.313450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:38.000 [2024-11-17 14:54:23.313463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:38.000 [2024-11-17 14:54:23.313476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:38.000 [2024-11-17 14:54:23.313489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:38.000 [2024-11-17 14:54:23.313502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:38.000 [2024-11-17 14:54:23.313514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:38.000 [2024-11-17 14:54:23.313528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:38.000 [2024-11-17 14:54:23.313542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:38.000 [2024-11-17 14:54:23.313555] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:38.000 [2024-11-17 14:54:23.313570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:38.000 [2024-11-17 14:54:23.313585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:38.000 [2024-11-17 14:54:23.313598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:38.000 [2024-11-17 14:54:23.313612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:38.000 [2024-11-17 14:54:23.313625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:38.000 [2024-11-17 14:54:23.313639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.313652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:38.000 [2024-11-17 14:54:23.313669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:17:38.000 [2024-11-17 14:54:23.313681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.346144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.346194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.000 [2024-11-17 14:54:23.346211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.383 ms 00:17:38.000 [2024-11-17 14:54:23.346223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.346399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.346416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:38.000 [2024-11-17 14:54:23.346431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:38.000 [2024-11-17 14:54:23.346444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.390762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.390811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.000 [2024-11-17 14:54:23.390833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.284 ms 00:17:38.000 [2024-11-17 14:54:23.390846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.391002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.391021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.000 [2024-11-17 14:54:23.391038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:38.000 [2024-11-17 14:54:23.391051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.391685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.391726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.000 [2024-11-17 14:54:23.391743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:17:38.000 [2024-11-17 14:54:23.391761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.391990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.392015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.000 [2024-11-17 14:54:23.392029] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:17:38.000 [2024-11-17 14:54:23.392041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.408762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.408814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.000 [2024-11-17 14:54:23.408830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.686 ms 00:17:38.000 [2024-11-17 14:54:23.408842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.423398] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:38.000 [2024-11-17 14:54:23.423447] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:38.000 [2024-11-17 14:54:23.423465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.423479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:38.000 [2024-11-17 14:54:23.423492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.451 ms 00:17:38.000 [2024-11-17 14:54:23.423503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.449212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.449274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:38.000 [2024-11-17 14:54:23.449292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.580 ms 00:17:38.000 [2024-11-17 14:54:23.449304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.462415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.462462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:38.000 [2024-11-17 14:54:23.462478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.995 ms 00:17:38.000 [2024-11-17 14:54:23.462489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.474947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.474994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:38.000 [2024-11-17 14:54:23.475012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.345 ms 00:17:38.000 [2024-11-17 14:54:23.475023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.000 [2024-11-17 14:54:23.475762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.000 [2024-11-17 14:54:23.475805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:38.000 [2024-11-17 14:54:23.475821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:17:38.000 [2024-11-17 14:54:23.475833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.261 [2024-11-17 14:54:23.539572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.261 [2024-11-17 14:54:23.539640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:38.261 [2024-11-17 14:54:23.539664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.696 ms 00:17:38.261 [2024-11-17 14:54:23.539677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.261 [2024-11-17 14:54:23.551086] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:38.261 [2024-11-17 14:54:23.570235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.261 [2024-11-17 14:54:23.570297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:38.261 [2024-11-17 14:54:23.570317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.425 ms 00:17:38.261 [2024-11-17 14:54:23.570337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.261 [2024-11-17 14:54:23.570467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.261 [2024-11-17 14:54:23.570485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:38.261 [2024-11-17 14:54:23.570500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:38.261 [2024-11-17 14:54:23.570513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.261 [2024-11-17 14:54:23.570596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.261 [2024-11-17 14:54:23.570612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:38.261 [2024-11-17 14:54:23.570625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:38.261 [2024-11-17 14:54:23.570643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.261 [2024-11-17 14:54:23.570684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.261 [2024-11-17 14:54:23.570700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:38.261 [2024-11-17 14:54:23.570713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:38.261 [2024-11-17 14:54:23.570726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.261 [2024-11-17 14:54:23.570773] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:38.261 [2024-11-17 14:54:23.570789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.261 [2024-11-17 14:54:23.570802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:38.261 [2024-11-17 14:54:23.570816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:38.261 [2024-11-17 14:54:23.570829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.261 [2024-11-17 14:54:23.596544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.261 [2024-11-17 14:54:23.596614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:38.261 [2024-11-17 14:54:23.596634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.681 ms 00:17:38.261 [2024-11-17 14:54:23.596645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.261 [2024-11-17 14:54:23.596821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.261 [2024-11-17 14:54:23.596841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:38.261 [2024-11-17 14:54:23.596857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:38.261 [2024-11-17 14:54:23.596872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:38.261 [2024-11-17 14:54:23.597994] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:38.261 [2024-11-17 14:54:23.601367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.266 ms, result 0 00:17:38.261 [2024-11-17 14:54:23.602894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.261 [2024-11-17 14:54:23.616515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:38.522  [2024-11-17T14:54:24.065Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-17 14:54:24.012560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.522 [2024-11-17 14:54:24.021595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.522 [2024-11-17 14:54:24.021644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:38.522 [2024-11-17 14:54:24.021662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:38.522 [2024-11-17 14:54:24.021682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.522 [2024-11-17 14:54:24.021714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:38.522 [2024-11-17 14:54:24.024744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.522 [2024-11-17 14:54:24.024789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:38.522 [2024-11-17 14:54:24.024804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.009 ms 00:17:38.522 [2024-11-17 14:54:24.024816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.522 [2024-11-17 14:54:24.027787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.522 [2024-11-17 14:54:24.027836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:38.522 [2024-11-17 14:54:24.027852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms 00:17:38.522 [2024-11-17 14:54:24.027863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.522 [2024-11-17 14:54:24.032357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.522 [2024-11-17 14:54:24.032409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:38.522 [2024-11-17 14:54:24.032425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:17:38.523 [2024-11-17 14:54:24.032436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.523 [2024-11-17 14:54:24.039581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.523 [2024-11-17 14:54:24.039626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:38.523 [2024-11-17 14:54:24.039642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.098 ms 00:17:38.523 [2024-11-17 14:54:24.039654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.785 [2024-11-17 14:54:24.064357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.785 [2024-11-17 14:54:24.064410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:38.785 [2024-11-17 14:54:24.064428] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.619 ms 00:17:38.785 [2024-11-17 14:54:24.064440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.785 [2024-11-17 14:54:24.080015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.785 [2024-11-17 14:54:24.080067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:38.785 [2024-11-17 14:54:24.080093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.517 ms 00:17:38.785 [2024-11-17 14:54:24.080104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.785 [2024-11-17 14:54:24.080299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.785 [2024-11-17 14:54:24.080322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:38.785 [2024-11-17 14:54:24.080337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:38.785 [2024-11-17 14:54:24.080351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.785 [2024-11-17 14:54:24.105614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.785 [2024-11-17 14:54:24.105663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:38.785 [2024-11-17 14:54:24.105681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.228 ms 00:17:38.785 [2024-11-17 14:54:24.105692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.785 [2024-11-17 14:54:24.130909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.785 [2024-11-17 14:54:24.130973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:38.785 [2024-11-17 14:54:24.130990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.156 ms 00:17:38.785 [2024-11-17 14:54:24.131001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.785 [2024-11-17 14:54:24.155298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.785 [2024-11-17 14:54:24.155344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:38.785 [2024-11-17 14:54:24.155361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.240 ms 00:17:38.785 [2024-11-17 14:54:24.155371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.785 [2024-11-17 14:54:24.179860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.785 [2024-11-17 14:54:24.179909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:38.785 [2024-11-17 14:54:24.179937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.397 ms 00:17:38.785 [2024-11-17 14:54:24.179947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.785 [2024-11-17 14:54:24.180006] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:38.785 [2024-11-17 14:54:24.180026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:38.785 [2024-11-17 14:54:24.180082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:38.785 [2024-11-17 14:54:24.180608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.180993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181065] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:38.786 [2024-11-17 14:54:24.181380] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:38.786 [2024-11-17 14:54:24.181392] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 89855542-54d3-43a5-bee9-bade0b1117b4 00:17:38.786 [2024-11-17 14:54:24.181406] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:38.786 [2024-11-17 14:54:24.181418] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:38.786 [2024-11-17 14:54:24.181430] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:38.786 [2024-11-17 14:54:24.181443] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:38.786 [2024-11-17 14:54:24.181455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:38.786 [2024-11-17 14:54:24.181469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:38.786 [2024-11-17 14:54:24.181487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:38.786 [2024-11-17 14:54:24.181499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:38.786 [2024-11-17 14:54:24.181510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:38.786 [2024-11-17 14:54:24.181523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.786 [2024-11-17 14:54:24.181536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:38.786 [2024-11-17 14:54:24.181550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.518 ms 00:17:38.786 [2024-11-17 14:54:24.181563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.786 [2024-11-17 14:54:24.195917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.786 [2024-11-17 14:54:24.195990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:38.786 [2024-11-17 14:54:24.196006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.305 ms 00:17:38.786 [2024-11-17 14:54:24.196018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.786 [2024-11-17 14:54:24.196493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.786 [2024-11-17 14:54:24.196526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:38.786 [2024-11-17 14:54:24.196541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:17:38.786 [2024-11-17 14:54:24.196553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.786 [2024-11-17 14:54:24.235459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.786 [2024-11-17 14:54:24.235509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.786 [2024-11-17 14:54:24.235536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.786 [2024-11-17 14:54:24.235555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.786 [2024-11-17 14:54:24.235680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.786 [2024-11-17 14:54:24.235697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.786 [2024-11-17 14:54:24.235712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.786 [2024-11-17 14:54:24.235724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.786 [2024-11-17 14:54:24.235791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.786 [2024-11-17 14:54:24.235805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.786 [2024-11-17 14:54:24.235819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.786 [2024-11-17 14:54:24.235831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.786 [2024-11-17 14:54:24.235864] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.786 [2024-11-17 14:54:24.235878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.786 [2024-11-17 14:54:24.235891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.786 [2024-11-17 14:54:24.235904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.786 [2024-11-17 14:54:24.320773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.786 [2024-11-17 14:54:24.320827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.786 [2024-11-17 14:54:24.320846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.786 [2024-11-17 14:54:24.320858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.047 [2024-11-17 14:54:24.390789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.047 [2024-11-17 14:54:24.390849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.047 [2024-11-17 14:54:24.390865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.047 [2024-11-17 14:54:24.390877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.047 [2024-11-17 14:54:24.391000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.047 [2024-11-17 14:54:24.391017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.047 [2024-11-17 14:54:24.391031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.047 [2024-11-17 14:54:24.391045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.047 [2024-11-17 14:54:24.391091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.047 [2024-11-17 14:54:24.391112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.047 [2024-11-17 14:54:24.391126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.047 [2024-11-17 14:54:24.391140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.047 [2024-11-17 14:54:24.391283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.047 [2024-11-17 14:54:24.391300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.047 [2024-11-17 14:54:24.391313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.047 [2024-11-17 14:54:24.391326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.047 [2024-11-17 14:54:24.391384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.047 [2024-11-17 14:54:24.391408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.047 [2024-11-17 14:54:24.391426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.047 [2024-11-17 14:54:24.391440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.047 [2024-11-17 14:54:24.391499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.047 [2024-11-17 14:54:24.391542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.047 [2024-11-17 14:54:24.391557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.047 [2024-11-17 14:54:24.391570] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:39.047 [2024-11-17 14:54:24.391644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.047 [2024-11-17 14:54:24.391665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.047 [2024-11-17 14:54:24.391679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.047 [2024-11-17 14:54:24.391691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.047 [2024-11-17 14:54:24.391902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.257 ms, result 0 00:17:39.620 00:17:39.620 00:17:39.620 14:54:25 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74095 00:17:39.620 14:54:25 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:39.620 14:54:25 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74095 00:17:39.620 14:54:25 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74095 ']' 00:17:39.620 14:54:25 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.620 14:54:25 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.620 14:54:25 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.620 14:54:25 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.620 14:54:25 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:39.882 [2024-11-17 14:54:25.232637] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
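The records above close out the previous FTL instance ("FTL shutdown ... result 0"), and the trim.sh trace (lines 92-94) then brings up a fresh target before any further RPCs are issued. A minimal sketch of that launch-and-wait step, using only the paths and helpers visible in this trace (waitforlisten is the helper sourced from autotest_common.sh; the backgrounding and pid capture are assumed, since trim.sh's exact wording is not shown here):

# Sketch only -- not part of the captured log output.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt" -L ftl_init &     # -L ftl_init enables the FTL init log component seen in the notices below
svcpid=$!                     # trim.sh line 93 stores this pid (74095 in this run)
waitforlisten "$svcpid"       # blocks until the target answers on /var/tmp/spdk.sock
"$rpc" load_config            # reloads the saved bdev/FTL config; its JSON source is not visible in this excerpt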
00:17:39.882 [2024-11-17 14:54:25.232792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74095 ] 00:17:39.882 [2024-11-17 14:54:25.393828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.144 [2024-11-17 14:54:25.519008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.716 14:54:26 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.716 14:54:26 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:40.716 14:54:26 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:40.977 [2024-11-17 14:54:26.420948] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.977 [2024-11-17 14:54:26.421046] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.240 [2024-11-17 14:54:26.599704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.599774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.240 [2024-11-17 14:54:26.599798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.240 [2024-11-17 14:54:26.599810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.602862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.602916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.240 [2024-11-17 14:54:26.602953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:17:41.240 [2024-11-17 14:54:26.602965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.603118] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.240 [2024-11-17 14:54:26.604021] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.240 [2024-11-17 14:54:26.604074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.604088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.240 [2024-11-17 14:54:26.604106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:17:41.240 [2024-11-17 14:54:26.604119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.605966] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:41.240 [2024-11-17 14:54:26.620265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.620327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:41.240 [2024-11-17 14:54:26.620348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.307 ms 00:17:41.240 [2024-11-17 14:54:26.620363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.620499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.620523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:41.240 [2024-11-17 14:54:26.620538] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:41.240 [2024-11-17 14:54:26.620554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.628662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.628715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.240 [2024-11-17 14:54:26.628730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.031 ms 00:17:41.240 [2024-11-17 14:54:26.628744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.628891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.628911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.240 [2024-11-17 14:54:26.628955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:41.240 [2024-11-17 14:54:26.628970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.629023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.629040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.240 [2024-11-17 14:54:26.629054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:41.240 [2024-11-17 14:54:26.629068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.629104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:41.240 [2024-11-17 14:54:26.633175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.633219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.240 [2024-11-17 14:54:26.633237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.076 ms 00:17:41.240 [2024-11-17 14:54:26.633249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.633352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.633370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.240 [2024-11-17 14:54:26.633388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:41.240 [2024-11-17 14:54:26.633404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.633438] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:41.240 [2024-11-17 14:54:26.633481] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:41.240 [2024-11-17 14:54:26.633546] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:41.240 [2024-11-17 14:54:26.633570] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:41.240 [2024-11-17 14:54:26.633721] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.240 [2024-11-17 14:54:26.633787] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.240 [2024-11-17 14:54:26.633813] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:41.240 [2024-11-17 14:54:26.633837] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.240 [2024-11-17 14:54:26.633856] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.240 [2024-11-17 14:54:26.633870] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:41.240 [2024-11-17 14:54:26.633886] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.240 [2024-11-17 14:54:26.633898] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.240 [2024-11-17 14:54:26.633934] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:41.240 [2024-11-17 14:54:26.633950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.633967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.240 [2024-11-17 14:54:26.633980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:17:41.240 [2024-11-17 14:54:26.633996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.634126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.240 [2024-11-17 14:54:26.634155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.240 [2024-11-17 14:54:26.634168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:41.240 [2024-11-17 14:54:26.634184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.240 [2024-11-17 14:54:26.634333] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.240 [2024-11-17 14:54:26.634363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.240 [2024-11-17 14:54:26.634378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.240 [2024-11-17 14:54:26.634394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.240 [2024-11-17 14:54:26.634408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.240 [2024-11-17 14:54:26.634422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.240 [2024-11-17 14:54:26.634434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:41.240 [2024-11-17 14:54:26.634453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.240 [2024-11-17 14:54:26.634465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:41.240 [2024-11-17 14:54:26.634479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.240 [2024-11-17 14:54:26.634490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.240 [2024-11-17 14:54:26.634504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:41.240 [2024-11-17 14:54:26.634516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.240 [2024-11-17 14:54:26.634530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.241 [2024-11-17 14:54:26.634542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:41.241 [2024-11-17 14:54:26.634556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.241 
[2024-11-17 14:54:26.634570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.241 [2024-11-17 14:54:26.634585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:41.241 [2024-11-17 14:54:26.634596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.241 [2024-11-17 14:54:26.634610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.241 [2024-11-17 14:54:26.634629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:41.241 [2024-11-17 14:54:26.634644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.241 [2024-11-17 14:54:26.634655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.241 [2024-11-17 14:54:26.634673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:41.241 [2024-11-17 14:54:26.634684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.241 [2024-11-17 14:54:26.634697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.241 [2024-11-17 14:54:26.634710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:41.241 [2024-11-17 14:54:26.634723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.241 [2024-11-17 14:54:26.634735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.241 [2024-11-17 14:54:26.634748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:41.241 [2024-11-17 14:54:26.634759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.241 [2024-11-17 14:54:26.634776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.241 [2024-11-17 14:54:26.634787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:41.241 [2024-11-17 14:54:26.634801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.241 [2024-11-17 14:54:26.634812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.241 [2024-11-17 14:54:26.634826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:41.241 [2024-11-17 14:54:26.634838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.241 [2024-11-17 14:54:26.634852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.241 [2024-11-17 14:54:26.634864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:41.241 [2024-11-17 14:54:26.634881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.241 [2024-11-17 14:54:26.634894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.241 [2024-11-17 14:54:26.634909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:41.241 [2024-11-17 14:54:26.634940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.241 [2024-11-17 14:54:26.634956] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.241 [2024-11-17 14:54:26.634970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.241 [2024-11-17 14:54:26.634988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.241 [2024-11-17 14:54:26.635001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.241 [2024-11-17 14:54:26.635017] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:41.241 [2024-11-17 14:54:26.635028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.241 [2024-11-17 14:54:26.635044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.241 [2024-11-17 14:54:26.635056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.241 [2024-11-17 14:54:26.635070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.241 [2024-11-17 14:54:26.635083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.241 [2024-11-17 14:54:26.635100] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.241 [2024-11-17 14:54:26.635117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.241 [2024-11-17 14:54:26.635141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:41.241 [2024-11-17 14:54:26.635159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:41.241 [2024-11-17 14:54:26.635175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:41.241 [2024-11-17 14:54:26.635190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:41.241 [2024-11-17 14:54:26.635207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:41.241 [2024-11-17 14:54:26.635220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:41.241 [2024-11-17 14:54:26.635236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:41.241 [2024-11-17 14:54:26.635248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:41.241 [2024-11-17 14:54:26.635263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:41.241 [2024-11-17 14:54:26.635275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:41.241 [2024-11-17 14:54:26.635289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:41.241 [2024-11-17 14:54:26.635306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:41.241 [2024-11-17 14:54:26.635321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:41.241 [2024-11-17 14:54:26.635335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:41.241 [2024-11-17 14:54:26.635351] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.241 [2024-11-17 
14:54:26.635367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.241 [2024-11-17 14:54:26.635387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.241 [2024-11-17 14:54:26.635401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.241 [2024-11-17 14:54:26.635418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.241 [2024-11-17 14:54:26.635432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.241 [2024-11-17 14:54:26.635448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.241 [2024-11-17 14:54:26.635463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.241 [2024-11-17 14:54:26.635480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:17:41.241 [2024-11-17 14:54:26.635493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.241 [2024-11-17 14:54:26.667491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.241 [2024-11-17 14:54:26.667555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.241 [2024-11-17 14:54:26.667576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.888 ms 00:17:41.241 [2024-11-17 14:54:26.667589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.241 [2024-11-17 14:54:26.667763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.241 [2024-11-17 14:54:26.667789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:41.241 [2024-11-17 14:54:26.667807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:41.241 [2024-11-17 14:54:26.667820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.241 [2024-11-17 14:54:26.702759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.241 [2024-11-17 14:54:26.702808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.241 [2024-11-17 14:54:26.702834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.903 ms 00:17:41.241 [2024-11-17 14:54:26.702846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.241 [2024-11-17 14:54:26.702978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.241 [2024-11-17 14:54:26.702995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.241 [2024-11-17 14:54:26.703013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.241 [2024-11-17 14:54:26.703026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.241 [2024-11-17 14:54:26.703639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.241 [2024-11-17 14:54:26.703679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.241 [2024-11-17 14:54:26.703699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:17:41.241 [2024-11-17 14:54:26.703712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:41.241 [2024-11-17 14:54:26.703949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.241 [2024-11-17 14:54:26.703980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.241 [2024-11-17 14:54:26.703997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:41.241 [2024-11-17 14:54:26.704009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.241 [2024-11-17 14:54:26.721723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.241 [2024-11-17 14:54:26.721771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.241 [2024-11-17 14:54:26.721788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.675 ms 00:17:41.241 [2024-11-17 14:54:26.721801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.241 [2024-11-17 14:54:26.735976] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:41.241 [2024-11-17 14:54:26.736028] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:41.241 [2024-11-17 14:54:26.736048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.242 [2024-11-17 14:54:26.736062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:41.242 [2024-11-17 14:54:26.736077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.079 ms 00:17:41.242 [2024-11-17 14:54:26.736088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.242 [2024-11-17 14:54:26.761813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.242 [2024-11-17 14:54:26.761867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:41.242 [2024-11-17 14:54:26.761889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.606 ms 00:17:41.242 [2024-11-17 14:54:26.761902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.242 [2024-11-17 14:54:26.774718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.242 [2024-11-17 14:54:26.774768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:41.242 [2024-11-17 14:54:26.774792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.683 ms 00:17:41.242 [2024-11-17 14:54:26.774803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.787449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.787497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:41.503 [2024-11-17 14:54:26.787530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.532 ms 00:17:41.503 [2024-11-17 14:54:26.787543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.788293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.788335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:41.503 [2024-11-17 14:54:26.788353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:17:41.503 [2024-11-17 14:54:26.788364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 
14:54:26.858655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.858727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.503 [2024-11-17 14:54:26.858756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.246 ms 00:17:41.503 [2024-11-17 14:54:26.858770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.869948] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:41.503 [2024-11-17 14:54:26.888938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.889003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.503 [2024-11-17 14:54:26.889022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.047 ms 00:17:41.503 [2024-11-17 14:54:26.889036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.889149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.889169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.503 [2024-11-17 14:54:26.889185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:41.503 [2024-11-17 14:54:26.889202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.889283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.889302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.503 [2024-11-17 14:54:26.889316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:41.503 [2024-11-17 14:54:26.889340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.889375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.889392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:41.503 [2024-11-17 14:54:26.889406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:41.503 [2024-11-17 14:54:26.889421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.889471] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.503 [2024-11-17 14:54:26.889493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.889511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.503 [2024-11-17 14:54:26.889529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:41.503 [2024-11-17 14:54:26.889542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.916166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.916220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.503 [2024-11-17 14:54:26.916243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.582 ms 00:17:41.503 [2024-11-17 14:54:26.916255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.916449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.503 [2024-11-17 14:54:26.916476] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.503 [2024-11-17 14:54:26.916497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:41.503 [2024-11-17 14:54:26.916511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.503 [2024-11-17 14:54:26.917602] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.503 [2024-11-17 14:54:26.921085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 317.570 ms, result 0 00:17:41.503 [2024-11-17 14:54:26.923301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.503 Some configs were skipped because the RPC state that can call them passed over. 00:17:41.503 14:54:26 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:41.765 [2024-11-17 14:54:27.167961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.765 [2024-11-17 14:54:27.168034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:41.765 [2024-11-17 14:54:27.168055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:17:41.765 [2024-11-17 14:54:27.168071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.765 [2024-11-17 14:54:27.168121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.315 ms, result 0 00:17:41.765 true 00:17:41.765 14:54:27 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:42.038 [2024-11-17 14:54:27.378275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.038 [2024-11-17 14:54:27.378311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:42.038 [2024-11-17 14:54:27.378325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.325 ms 00:17:42.038 [2024-11-17 14:54:27.378334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.038 [2024-11-17 14:54:27.378373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.419 ms, result 0 00:17:42.038 true 00:17:42.038 14:54:27 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74095 00:17:42.038 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74095 ']' 00:17:42.038 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74095 00:17:42.038 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:42.038 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.038 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74095 00:17:42.038 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.038 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.038 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74095' 00:17:42.038 killing process with pid 74095 00:17:42.038 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74095 00:17:42.039 14:54:27 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74095 00:17:42.612 [2024-11-17 14:54:27.962212] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.612 [2024-11-17 14:54:27.962422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:42.612 [2024-11-17 14:54:27.962497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:42.612 [2024-11-17 14:54:27.962528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.612 [2024-11-17 14:54:27.962595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:42.612 [2024-11-17 14:54:27.964774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.612 [2024-11-17 14:54:27.964885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:42.612 [2024-11-17 14:54:27.964968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:17:42.612 [2024-11-17 14:54:27.965044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.612 [2024-11-17 14:54:27.965316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.612 [2024-11-17 14:54:27.965387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:42.612 [2024-11-17 14:54:27.965454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:17:42.612 [2024-11-17 14:54:27.965518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.612 [2024-11-17 14:54:27.968723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.612 [2024-11-17 14:54:27.968811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:42.612 [2024-11-17 14:54:27.968884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:17:42.612 [2024-11-17 14:54:27.968963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.612 [2024-11-17 14:54:27.974278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.612 [2024-11-17 14:54:27.974367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:42.612 [2024-11-17 14:54:27.974427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.253 ms 00:17:42.612 [2024-11-17 14:54:27.974499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.612 [2024-11-17 14:54:27.981935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.612 [2024-11-17 14:54:27.982018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:42.612 [2024-11-17 14:54:27.982080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.358 ms 00:17:42.612 [2024-11-17 14:54:27.982115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.612 [2024-11-17 14:54:27.988670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.612 [2024-11-17 14:54:27.988756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:42.612 [2024-11-17 14:54:27.988816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.445 ms 00:17:42.612 [2024-11-17 14:54:27.988845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.612 [2024-11-17 14:54:27.988998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.613 [2024-11-17 14:54:27.989031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:42.613 [2024-11-17 14:54:27.989186] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:42.613 [2024-11-17 14:54:27.989202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.613 [2024-11-17 14:54:27.997248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.613 [2024-11-17 14:54:27.997327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:42.613 [2024-11-17 14:54:27.997387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.017 ms 00:17:42.613 [2024-11-17 14:54:27.997415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.613 [2024-11-17 14:54:28.004857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.613 [2024-11-17 14:54:28.004952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:42.613 [2024-11-17 14:54:28.005017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.389 ms 00:17:42.613 [2024-11-17 14:54:28.005044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.613 [2024-11-17 14:54:28.012055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.613 [2024-11-17 14:54:28.012135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:42.613 [2024-11-17 14:54:28.012215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.959 ms 00:17:42.613 [2024-11-17 14:54:28.012244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.613 [2024-11-17 14:54:28.019331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.613 [2024-11-17 14:54:28.019412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:42.613 [2024-11-17 14:54:28.019471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.998 ms 00:17:42.613 [2024-11-17 14:54:28.019499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.613 [2024-11-17 14:54:28.019566] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:42.613 [2024-11-17 14:54:28.019879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.019907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.019918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.019952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.019961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.019975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.019985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.019997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020029] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 
[2024-11-17 14:54:28.020299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:42.613 [2024-11-17 14:54:28.020572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:42.613 [2024-11-17 14:54:28.020694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.020995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.021007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:42.614 [2024-11-17 14:54:28.021026] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:42.614 [2024-11-17 14:54:28.021039] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 89855542-54d3-43a5-bee9-bade0b1117b4 00:17:42.614 [2024-11-17 14:54:28.021057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:42.614 [2024-11-17 14:54:28.021068] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:42.614 [2024-11-17 14:54:28.021077] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:42.614 [2024-11-17 14:54:28.021089] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:42.614 [2024-11-17 14:54:28.021097] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:42.614 [2024-11-17 14:54:28.021109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:42.614 [2024-11-17 14:54:28.021118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:42.614 [2024-11-17 14:54:28.021129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:42.614 [2024-11-17 14:54:28.021138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:42.614 [2024-11-17 14:54:28.021150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:42.614 [2024-11-17 14:54:28.021161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:42.614 [2024-11-17 14:54:28.021174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.586 ms 00:17:42.614 [2024-11-17 14:54:28.021185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.614 [2024-11-17 14:54:28.031839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.614 [2024-11-17 14:54:28.031934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:42.614 [2024-11-17 14:54:28.032002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.619 ms 00:17:42.614 [2024-11-17 14:54:28.032031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.614 [2024-11-17 14:54:28.032363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.614 [2024-11-17 14:54:28.032437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:42.614 [2024-11-17 14:54:28.032496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:42.614 [2024-11-17 14:54:28.032524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.614 [2024-11-17 14:54:28.067133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.614 [2024-11-17 14:54:28.067221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.614 [2024-11-17 14:54:28.067278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.614 [2024-11-17 14:54:28.067306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.614 [2024-11-17 14:54:28.067429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.614 [2024-11-17 14:54:28.067459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.614 [2024-11-17 14:54:28.067488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.614 [2024-11-17 14:54:28.067526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.614 [2024-11-17 14:54:28.067601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.614 [2024-11-17 14:54:28.067693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.614 [2024-11-17 14:54:28.067723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.614 [2024-11-17 14:54:28.067748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.614 [2024-11-17 14:54:28.067789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.614 [2024-11-17 14:54:28.067816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.614 [2024-11-17 14:54:28.067844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.614 [2024-11-17 14:54:28.067935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.614 [2024-11-17 14:54:28.127355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.614 [2024-11-17 14:54:28.127471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.614 [2024-11-17 14:54:28.127544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.614 [2024-11-17 14:54:28.127573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.875 [2024-11-17 
14:54:28.176627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.875 [2024-11-17 14:54:28.176731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.875 [2024-11-17 14:54:28.176790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.875 [2024-11-17 14:54:28.176818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.875 [2024-11-17 14:54:28.177792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.875 [2024-11-17 14:54:28.177876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.875 [2024-11-17 14:54:28.177953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.875 [2024-11-17 14:54:28.177984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.875 [2024-11-17 14:54:28.178040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.875 [2024-11-17 14:54:28.178125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.875 [2024-11-17 14:54:28.178163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.875 [2024-11-17 14:54:28.178189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.875 [2024-11-17 14:54:28.178305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.875 [2024-11-17 14:54:28.178374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.875 [2024-11-17 14:54:28.178403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.875 [2024-11-17 14:54:28.178427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.875 [2024-11-17 14:54:28.178491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.875 [2024-11-17 14:54:28.178568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:42.875 [2024-11-17 14:54:28.178597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.875 [2024-11-17 14:54:28.178622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.875 [2024-11-17 14:54:28.178726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.875 [2024-11-17 14:54:28.178818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.875 [2024-11-17 14:54:28.178885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.875 [2024-11-17 14:54:28.178914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.875 [2024-11-17 14:54:28.178996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.875 [2024-11-17 14:54:28.179030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.875 [2024-11-17 14:54:28.179058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.875 [2024-11-17 14:54:28.179121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.875 [2024-11-17 14:54:28.179291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 217.050 ms, result 0 00:17:43.446 14:54:28 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:43.446 [2024-11-17 14:54:28.760539] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:43.446 [2024-11-17 14:54:28.761367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74150 ] 00:17:43.446 [2024-11-17 14:54:28.925846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.708 [2024-11-17 14:54:28.999026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.708 [2024-11-17 14:54:29.203349] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.708 [2024-11-17 14:54:29.203557] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.969 [2024-11-17 14:54:29.355585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.355707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:43.969 [2024-11-17 14:54:29.355778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:43.969 [2024-11-17 14:54:29.355808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.357941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.358036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.969 [2024-11-17 14:54:29.358103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.093 ms 00:17:43.969 [2024-11-17 14:54:29.358132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.358313] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:43.969 [2024-11-17 14:54:29.358900] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:43.969 [2024-11-17 14:54:29.359007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.359060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.969 [2024-11-17 14:54:29.359090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:17:43.969 [2024-11-17 14:54:29.359220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.360249] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:43.969 [2024-11-17 14:54:29.369902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.369998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:43.969 [2024-11-17 14:54:29.370059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.653 ms 00:17:43.969 [2024-11-17 14:54:29.370087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.370219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.370261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:43.969 [2024-11-17 14:54:29.370288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:43.969 [2024-11-17 
14:54:29.370356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.374708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.374797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.969 [2024-11-17 14:54:29.374855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.281 ms 00:17:43.969 [2024-11-17 14:54:29.374883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.374998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.375030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.969 [2024-11-17 14:54:29.375056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:43.969 [2024-11-17 14:54:29.375118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.375213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.375252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:43.969 [2024-11-17 14:54:29.375350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:43.969 [2024-11-17 14:54:29.375378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.375542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:43.969 [2024-11-17 14:54:29.378348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.378428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.969 [2024-11-17 14:54:29.378485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.814 ms 00:17:43.969 [2024-11-17 14:54:29.378514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.378571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.378630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:43.969 [2024-11-17 14:54:29.378657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:43.969 [2024-11-17 14:54:29.378717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.378763] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:43.969 [2024-11-17 14:54:29.378844] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:43.969 [2024-11-17 14:54:29.378915] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:43.969 [2024-11-17 14:54:29.379010] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:43.969 [2024-11-17 14:54:29.379149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:43.969 [2024-11-17 14:54:29.379193] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:43.969 [2024-11-17 14:54:29.379271] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:17:43.969 [2024-11-17 14:54:29.379315] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:43.969 [2024-11-17 14:54:29.379363] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:43.969 [2024-11-17 14:54:29.379473] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:43.969 [2024-11-17 14:54:29.379500] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:43.969 [2024-11-17 14:54:29.379542] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:43.969 [2024-11-17 14:54:29.379569] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:43.969 [2024-11-17 14:54:29.379638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.379668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:43.969 [2024-11-17 14:54:29.379694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:17:43.969 [2024-11-17 14:54:29.379720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.379851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.969 [2024-11-17 14:54:29.379885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:43.969 [2024-11-17 14:54:29.379915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:43.969 [2024-11-17 14:54:29.379949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.969 [2024-11-17 14:54:29.380077] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:43.969 [2024-11-17 14:54:29.380116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:43.969 [2024-11-17 14:54:29.380144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.969 [2024-11-17 14:54:29.380170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.969 [2024-11-17 14:54:29.380242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:43.969 [2024-11-17 14:54:29.380272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:43.969 [2024-11-17 14:54:29.380297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:43.969 [2024-11-17 14:54:29.380322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:43.969 [2024-11-17 14:54:29.380347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:43.969 [2024-11-17 14:54:29.380415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.969 [2024-11-17 14:54:29.380444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:43.969 [2024-11-17 14:54:29.380469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:43.969 [2024-11-17 14:54:29.380494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.969 [2024-11-17 14:54:29.380525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:43.970 [2024-11-17 14:54:29.380609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:43.970 [2024-11-17 14:54:29.380639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.970 [2024-11-17 14:54:29.380664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:43.970 [2024-11-17 14:54:29.380692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:43.970 [2024-11-17 14:54:29.380716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.970 [2024-11-17 14:54:29.380791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:43.970 [2024-11-17 14:54:29.380817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:43.970 [2024-11-17 14:54:29.380842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.970 [2024-11-17 14:54:29.380867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:43.970 [2024-11-17 14:54:29.380952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:43.970 [2024-11-17 14:54:29.380984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.970 [2024-11-17 14:54:29.381093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:43.970 [2024-11-17 14:54:29.381123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:43.970 [2024-11-17 14:54:29.381148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.970 [2024-11-17 14:54:29.381172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:43.970 [2024-11-17 14:54:29.381198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:43.970 [2024-11-17 14:54:29.381264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.970 [2024-11-17 14:54:29.381293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:43.970 [2024-11-17 14:54:29.381318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:43.970 [2024-11-17 14:54:29.381344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.970 [2024-11-17 14:54:29.381368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:43.970 [2024-11-17 14:54:29.381433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:43.970 [2024-11-17 14:54:29.381461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.970 [2024-11-17 14:54:29.381487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:43.970 [2024-11-17 14:54:29.381512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:43.970 [2024-11-17 14:54:29.381537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.970 [2024-11-17 14:54:29.381597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:43.970 [2024-11-17 14:54:29.381626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:43.970 [2024-11-17 14:54:29.381637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.970 [2024-11-17 14:54:29.381647] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:43.970 [2024-11-17 14:54:29.381657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:43.970 [2024-11-17 14:54:29.381667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.970 [2024-11-17 14:54:29.381681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.970 [2024-11-17 14:54:29.381691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:43.970 [2024-11-17 14:54:29.381701] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:43.970 [2024-11-17 14:54:29.381709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:43.970 [2024-11-17 14:54:29.381719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:43.970 [2024-11-17 14:54:29.381727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:43.970 [2024-11-17 14:54:29.381737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:43.970 [2024-11-17 14:54:29.381747] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:43.970 [2024-11-17 14:54:29.381759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.970 [2024-11-17 14:54:29.381770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:43.970 [2024-11-17 14:54:29.381780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:43.970 [2024-11-17 14:54:29.381789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:43.970 [2024-11-17 14:54:29.381799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:43.970 [2024-11-17 14:54:29.381808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:43.970 [2024-11-17 14:54:29.381818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:43.970 [2024-11-17 14:54:29.381827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:43.970 [2024-11-17 14:54:29.381836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:43.970 [2024-11-17 14:54:29.381845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:43.970 [2024-11-17 14:54:29.381855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:43.970 [2024-11-17 14:54:29.381864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:43.970 [2024-11-17 14:54:29.381874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:43.970 [2024-11-17 14:54:29.381883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:43.970 [2024-11-17 14:54:29.381893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:43.970 [2024-11-17 14:54:29.381902] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:43.970 [2024-11-17 14:54:29.381912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.970 [2024-11-17 14:54:29.381934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:43.970 [2024-11-17 14:54:29.381945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:43.970 [2024-11-17 14:54:29.381955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:43.970 [2024-11-17 14:54:29.381965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:43.970 [2024-11-17 14:54:29.381975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.970 [2024-11-17 14:54:29.381985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:43.970 [2024-11-17 14:54:29.381998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.973 ms 00:17:43.970 [2024-11-17 14:54:29.382007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.970 [2024-11-17 14:54:29.402801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.970 [2024-11-17 14:54:29.402906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.970 [2024-11-17 14:54:29.402942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.715 ms 00:17:43.970 [2024-11-17 14:54:29.402952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.970 [2024-11-17 14:54:29.403073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.970 [2024-11-17 14:54:29.403089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:43.970 [2024-11-17 14:54:29.403100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:43.970 [2024-11-17 14:54:29.403110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.970 [2024-11-17 14:54:29.441976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.970 [2024-11-17 14:54:29.442075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.970 [2024-11-17 14:54:29.442093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.843 ms 00:17:43.970 [2024-11-17 14:54:29.442106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.970 [2024-11-17 14:54:29.442176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.970 [2024-11-17 14:54:29.442189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.970 [2024-11-17 14:54:29.442200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:43.970 [2024-11-17 14:54:29.442210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.970 [2024-11-17 14:54:29.442516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.970 [2024-11-17 14:54:29.442538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.970 [2024-11-17 14:54:29.442548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:43.970 [2024-11-17 14:54:29.442556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.970 [2024-11-17 14:54:29.442702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:43.970 [2024-11-17 14:54:29.442718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.970 [2024-11-17 14:54:29.442729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:17:43.970 [2024-11-17 14:54:29.442737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.970 [2024-11-17 14:54:29.453487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.970 [2024-11-17 14:54:29.453577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.970 [2024-11-17 14:54:29.453593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.727 ms 00:17:43.970 [2024-11-17 14:54:29.453601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.970 [2024-11-17 14:54:29.463239] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:43.970 [2024-11-17 14:54:29.463266] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:43.970 [2024-11-17 14:54:29.463279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.970 [2024-11-17 14:54:29.463287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:43.971 [2024-11-17 14:54:29.463297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.556 ms 00:17:43.971 [2024-11-17 14:54:29.463305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.971 [2024-11-17 14:54:29.481789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.971 [2024-11-17 14:54:29.481825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:43.971 [2024-11-17 14:54:29.481838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.427 ms 00:17:43.971 [2024-11-17 14:54:29.481848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.971 [2024-11-17 14:54:29.490652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.971 [2024-11-17 14:54:29.490681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:43.971 [2024-11-17 14:54:29.490692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.735 ms 00:17:43.971 [2024-11-17 14:54:29.490701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.971 [2024-11-17 14:54:29.499405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.971 [2024-11-17 14:54:29.499432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:43.971 [2024-11-17 14:54:29.499442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.652 ms 00:17:43.971 [2024-11-17 14:54:29.499451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.971 [2024-11-17 14:54:29.500019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.971 [2024-11-17 14:54:29.500106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:43.971 [2024-11-17 14:54:29.500121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:17:43.971 [2024-11-17 14:54:29.500131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.231 [2024-11-17 14:54:29.543163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.232 [2024-11-17 14:54:29.543296] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.232 [2024-11-17 14:54:29.543314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.004 ms 00:17:44.232 [2024-11-17 14:54:29.543323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.232 [2024-11-17 14:54:29.551104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.232 [2024-11-17 14:54:29.562577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.232 [2024-11-17 14:54:29.562680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.232 [2024-11-17 14:54:29.562696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.178 ms 00:17:44.232 [2024-11-17 14:54:29.562710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.232 [2024-11-17 14:54:29.562804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.232 [2024-11-17 14:54:29.562817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.232 [2024-11-17 14:54:29.562827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:44.232 [2024-11-17 14:54:29.562837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.232 [2024-11-17 14:54:29.562890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.232 [2024-11-17 14:54:29.562901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.232 [2024-11-17 14:54:29.562912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:44.232 [2024-11-17 14:54:29.562943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.232 [2024-11-17 14:54:29.562973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.232 [2024-11-17 14:54:29.562984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.232 [2024-11-17 14:54:29.562994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.232 [2024-11-17 14:54:29.563004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.232 [2024-11-17 14:54:29.563035] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.232 [2024-11-17 14:54:29.563047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.232 [2024-11-17 14:54:29.563057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.232 [2024-11-17 14:54:29.563067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:44.232 [2024-11-17 14:54:29.563076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.232 [2024-11-17 14:54:29.581079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.232 [2024-11-17 14:54:29.581175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.232 [2024-11-17 14:54:29.581193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.977 ms 00:17:44.232 [2024-11-17 14:54:29.581202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.232 [2024-11-17 14:54:29.581293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.232 [2024-11-17 14:54:29.581306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.232 [2024-11-17 14:54:29.581317] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:44.232 [2024-11-17 14:54:29.581326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.232 [2024-11-17 14:54:29.581981] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.232 [2024-11-17 14:54:29.584312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.158 ms, result 0 00:17:44.232 [2024-11-17 14:54:29.584996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.232 [2024-11-17 14:54:29.599793] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:45.173  [2024-11-17T14:54:31.657Z] Copying: 27/256 [MB] (27 MBps) [2024-11-17T14:54:33.042Z] Copying: 60/256 [MB] (32 MBps) [2024-11-17T14:54:33.986Z] Copying: 80/256 [MB] (19 MBps) [2024-11-17T14:54:34.931Z] Copying: 96/256 [MB] (16 MBps) [2024-11-17T14:54:35.876Z] Copying: 110/256 [MB] (13 MBps) [2024-11-17T14:54:36.820Z] Copying: 130/256 [MB] (20 MBps) [2024-11-17T14:54:37.801Z] Copying: 150/256 [MB] (19 MBps) [2024-11-17T14:54:38.745Z] Copying: 171/256 [MB] (21 MBps) [2024-11-17T14:54:39.689Z] Copying: 210/256 [MB] (39 MBps) [2024-11-17T14:54:41.074Z] Copying: 232/256 [MB] (21 MBps) [2024-11-17T14:54:41.074Z] Copying: 246/256 [MB] (13 MBps) [2024-11-17T14:54:41.648Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-17 14:54:41.440416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:56.105 [2024-11-17 14:54:41.454378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.454435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:56.105 [2024-11-17 14:54:41.454451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:56.105 [2024-11-17 14:54:41.454476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.454502] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:56.105 [2024-11-17 14:54:41.457580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.457619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:56.105 [2024-11-17 14:54:41.457632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:17:56.105 [2024-11-17 14:54:41.457642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.457947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.457959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:56.105 [2024-11-17 14:54:41.457969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:17:56.105 [2024-11-17 14:54:41.457978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.461695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.461948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:56.105 [2024-11-17 14:54:41.461966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.702 ms 00:17:56.105 [2024-11-17 14:54:41.461975] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.468886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.468935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:56.105 [2024-11-17 14:54:41.468947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.880 ms 00:17:56.105 [2024-11-17 14:54:41.468955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.494789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.494835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:56.105 [2024-11-17 14:54:41.494849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.762 ms 00:17:56.105 [2024-11-17 14:54:41.494857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.510733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.510778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:56.105 [2024-11-17 14:54:41.510798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.823 ms 00:17:56.105 [2024-11-17 14:54:41.510807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.510993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.511006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:56.105 [2024-11-17 14:54:41.511016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:17:56.105 [2024-11-17 14:54:41.511024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.536390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.536569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:56.105 [2024-11-17 14:54:41.536591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.337 ms 00:17:56.105 [2024-11-17 14:54:41.536599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.561606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.561650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:56.105 [2024-11-17 14:54:41.561663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.950 ms 00:17:56.105 [2024-11-17 14:54:41.561671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.586113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.105 [2024-11-17 14:54:41.586157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:56.105 [2024-11-17 14:54:41.586170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.380 ms 00:17:56.105 [2024-11-17 14:54:41.586178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.105 [2024-11-17 14:54:41.610404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.106 [2024-11-17 14:54:41.610445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:56.106 [2024-11-17 14:54:41.610457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 24.143 ms 00:17:56.106 [2024-11-17 14:54:41.610465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.106 [2024-11-17 14:54:41.610512] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:56.106 [2024-11-17 14:54:41.610529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 
[2024-11-17 14:54:41.610711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:17:56.106 [2024-11-17 14:54:41.610944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.610992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:56.106 [2024-11-17 14:54:41.611159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:56.107 [2024-11-17 14:54:41.611385] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:56.107 [2024-11-17 14:54:41.611393] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 89855542-54d3-43a5-bee9-bade0b1117b4 00:17:56.107 [2024-11-17 14:54:41.611402] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:56.107 [2024-11-17 14:54:41.611410] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:56.107 [2024-11-17 14:54:41.611419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:56.107 [2024-11-17 14:54:41.611428] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:56.107 [2024-11-17 14:54:41.611436] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:56.107 [2024-11-17 14:54:41.611444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:56.107 [2024-11-17 14:54:41.611456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:56.107 [2024-11-17 14:54:41.611463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:56.107 [2024-11-17 14:54:41.611470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:56.107 [2024-11-17 14:54:41.611477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.107 [2024-11-17 14:54:41.611485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:56.107 [2024-11-17 14:54:41.611495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:17:56.107 [2024-11-17 14:54:41.611502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.107 [2024-11-17 14:54:41.625094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.107 [2024-11-17 14:54:41.625139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:56.107 [2024-11-17 14:54:41.625150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.546 ms 00:17:56.107 [2024-11-17 14:54:41.625159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.107 [2024-11-17 14:54:41.625572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.107 [2024-11-17 14:54:41.625590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:56.107 [2024-11-17 14:54:41.625600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:17:56.107 [2024-11-17 14:54:41.625608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.664493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.664542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.369 [2024-11-17 14:54:41.664554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.664569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.664683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 
14:54:41.664694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.369 [2024-11-17 14:54:41.664703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.664710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.664759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.664769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.369 [2024-11-17 14:54:41.664777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.664785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.664806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.664815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.369 [2024-11-17 14:54:41.664823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.664831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.749480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.749536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.369 [2024-11-17 14:54:41.749551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.749561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.817317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.817353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.369 [2024-11-17 14:54:41.817363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.817371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.817436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.817445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.369 [2024-11-17 14:54:41.817453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.817461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.817488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.817500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.369 [2024-11-17 14:54:41.817507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.817515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.817602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.817611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.369 [2024-11-17 14:54:41.817619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.817627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.817656] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.817665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:56.369 [2024-11-17 14:54:41.817676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.817683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.817719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.817727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.369 [2024-11-17 14:54:41.817735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.817742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.817781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.369 [2024-11-17 14:54:41.817794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.369 [2024-11-17 14:54:41.817801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.369 [2024-11-17 14:54:41.817808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.369 [2024-11-17 14:54:41.817961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 363.565 ms, result 0 00:17:57.313 00:17:57.313 00:17:57.313 14:54:42 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:57.574 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:57.574 14:54:43 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:57.574 14:54:43 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:57.574 14:54:43 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:57.574 14:54:43 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:57.574 14:54:43 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:57.835 14:54:43 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:57.835 Process with pid 74095 is not found 00:17:57.835 14:54:43 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74095 00:17:57.835 14:54:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74095 ']' 00:17:57.835 14:54:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74095 00:17:57.835 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74095) - No such process 00:17:57.835 14:54:43 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 74095 is not found' 00:17:57.835 ************************************ 00:17:57.835 END TEST ftl_trim 00:17:57.835 ************************************ 00:17:57.835 00:17:57.835 real 1m12.189s 00:17:57.835 user 1m28.385s 00:17:57.835 sys 0m15.131s 00:17:57.835 14:54:43 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.835 14:54:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:57.836 14:54:43 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:57.836 14:54:43 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:57.836 14:54:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.836 
14:54:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:57.836 ************************************ 00:17:57.836 START TEST ftl_restore 00:17:57.836 ************************************ 00:17:57.836 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:57.836 * Looking for test storage... 00:17:57.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:57.836 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:57.836 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:17:57.836 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:58.098 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.098 14:54:43 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:58.098 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.098 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:58.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.098 --rc genhtml_branch_coverage=1 00:17:58.098 --rc genhtml_function_coverage=1 00:17:58.098 --rc genhtml_legend=1 00:17:58.098 --rc geninfo_all_blocks=1 00:17:58.098 --rc geninfo_unexecuted_blocks=1 00:17:58.098 00:17:58.098 ' 00:17:58.098 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:58.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.098 --rc genhtml_branch_coverage=1 00:17:58.098 --rc genhtml_function_coverage=1 00:17:58.098 --rc genhtml_legend=1 00:17:58.098 --rc geninfo_all_blocks=1 00:17:58.098 --rc geninfo_unexecuted_blocks=1 00:17:58.098 00:17:58.098 ' 00:17:58.098 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:58.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.098 --rc genhtml_branch_coverage=1 00:17:58.098 --rc genhtml_function_coverage=1 00:17:58.098 --rc genhtml_legend=1 00:17:58.098 --rc geninfo_all_blocks=1 00:17:58.098 --rc geninfo_unexecuted_blocks=1 00:17:58.098 00:17:58.098 ' 00:17:58.098 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:58.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.098 --rc genhtml_branch_coverage=1 00:17:58.098 --rc genhtml_function_coverage=1 00:17:58.098 --rc genhtml_legend=1 00:17:58.098 --rc geninfo_all_blocks=1 00:17:58.098 --rc geninfo_unexecuted_blocks=1 00:17:58.098 00:17:58.098 ' 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
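(The cmp_versions trace above amounts to a component-wise numeric compare: both version strings are split on '.', '-' and ':', each pair of components is compared in turn, and the first difference decides the result, which is why 'lt 1.15 2' succeeds here — 1 < 2 on the first component. A minimal standalone sketch of that logic, written as plain bash and not the actual scripts/common.sh source, assuming purely numeric components:

  lt() {  # succeed (return 0) when version $1 sorts before version $2
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v d1 d2
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          d1=${ver1[v]:-0}; d2=${ver2[v]:-0}
          (( d1 > d2 )) && return 1
          (( d1 < d2 )) && return 0
      done
      return 1   # equal versions are not strictly less-than
  }
  lt 1.15 2 && echo "lcov is older than 2"
)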
00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.XnIPB8GHpU 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:58.098 
14:54:43 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74365 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74365 00:17:58.098 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 74365 ']' 00:17:58.098 14:54:43 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.098 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.099 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.099 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.099 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.099 14:54:43 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:58.099 [2024-11-17 14:54:43.501531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:58.099 [2024-11-17 14:54:43.501936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74365 ] 00:17:58.360 [2024-11-17 14:54:43.666723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.360 [2024-11-17 14:54:43.784357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.932 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.932 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:17:58.932 14:54:44 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:58.932 14:54:44 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:58.932 14:54:44 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:58.932 14:54:44 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:58.932 14:54:44 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:58.932 14:54:44 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:59.505 14:54:44 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:59.505 14:54:44 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:59.505 14:54:44 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:59.505 { 00:17:59.505 "name": "nvme0n1", 00:17:59.505 "aliases": [ 00:17:59.505 "81360a1a-8ad2-46cb-8519-3b044ebc3c4d" 00:17:59.505 ], 00:17:59.505 "product_name": "NVMe disk", 00:17:59.505 "block_size": 4096, 00:17:59.505 "num_blocks": 1310720, 00:17:59.505 "uuid": 
"81360a1a-8ad2-46cb-8519-3b044ebc3c4d", 00:17:59.505 "numa_id": -1, 00:17:59.505 "assigned_rate_limits": { 00:17:59.505 "rw_ios_per_sec": 0, 00:17:59.505 "rw_mbytes_per_sec": 0, 00:17:59.505 "r_mbytes_per_sec": 0, 00:17:59.505 "w_mbytes_per_sec": 0 00:17:59.505 }, 00:17:59.505 "claimed": true, 00:17:59.505 "claim_type": "read_many_write_one", 00:17:59.505 "zoned": false, 00:17:59.505 "supported_io_types": { 00:17:59.505 "read": true, 00:17:59.505 "write": true, 00:17:59.505 "unmap": true, 00:17:59.505 "flush": true, 00:17:59.505 "reset": true, 00:17:59.505 "nvme_admin": true, 00:17:59.505 "nvme_io": true, 00:17:59.505 "nvme_io_md": false, 00:17:59.505 "write_zeroes": true, 00:17:59.505 "zcopy": false, 00:17:59.505 "get_zone_info": false, 00:17:59.505 "zone_management": false, 00:17:59.505 "zone_append": false, 00:17:59.505 "compare": true, 00:17:59.505 "compare_and_write": false, 00:17:59.505 "abort": true, 00:17:59.505 "seek_hole": false, 00:17:59.505 "seek_data": false, 00:17:59.505 "copy": true, 00:17:59.505 "nvme_iov_md": false 00:17:59.505 }, 00:17:59.505 "driver_specific": { 00:17:59.505 "nvme": [ 00:17:59.505 { 00:17:59.505 "pci_address": "0000:00:11.0", 00:17:59.505 "trid": { 00:17:59.505 "trtype": "PCIe", 00:17:59.505 "traddr": "0000:00:11.0" 00:17:59.505 }, 00:17:59.505 "ctrlr_data": { 00:17:59.505 "cntlid": 0, 00:17:59.505 "vendor_id": "0x1b36", 00:17:59.505 "model_number": "QEMU NVMe Ctrl", 00:17:59.505 "serial_number": "12341", 00:17:59.505 "firmware_revision": "8.0.0", 00:17:59.505 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:59.505 "oacs": { 00:17:59.505 "security": 0, 00:17:59.505 "format": 1, 00:17:59.505 "firmware": 0, 00:17:59.505 "ns_manage": 1 00:17:59.505 }, 00:17:59.505 "multi_ctrlr": false, 00:17:59.505 "ana_reporting": false 00:17:59.505 }, 00:17:59.505 "vs": { 00:17:59.505 "nvme_version": "1.4" 00:17:59.505 }, 00:17:59.505 "ns_data": { 00:17:59.505 "id": 1, 00:17:59.505 "can_share": false 00:17:59.505 } 00:17:59.505 } 00:17:59.505 ], 00:17:59.505 "mp_policy": "active_passive" 00:17:59.505 } 00:17:59.505 } 00:17:59.505 ]' 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:59.505 14:54:44 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:17:59.505 14:54:44 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:59.505 14:54:44 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:59.505 14:54:44 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:59.505 14:54:44 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:59.505 14:54:44 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:59.766 14:54:45 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=253810e9-bd76-4310-838e-2bc97feea287 00:17:59.767 14:54:45 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:59.767 14:54:45 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 253810e9-bd76-4310-838e-2bc97feea287 00:18:00.028 14:54:45 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:00.289 14:54:45 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=efd7472e-a6da-49e6-aeae-5788363a0b0d 00:18:00.289 14:54:45 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u efd7472e-a6da-49e6-aeae-5788363a0b0d 00:18:00.289 14:54:45 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:00.289 14:54:45 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:00.551 14:54:45 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:00.551 14:54:45 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:00.551 14:54:45 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:00.551 14:54:45 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:00.551 14:54:45 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:00.551 14:54:45 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:00.551 14:54:45 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:00.551 14:54:45 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:00.551 14:54:45 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:00.551 14:54:45 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:00.551 14:54:45 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:00.551 14:54:45 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:00.551 { 00:18:00.551 "name": "6ab47100-1d20-4323-99d1-b8bd233ed3a7", 00:18:00.551 "aliases": [ 00:18:00.551 "lvs/nvme0n1p0" 00:18:00.551 ], 00:18:00.551 "product_name": "Logical Volume", 00:18:00.551 "block_size": 4096, 00:18:00.551 "num_blocks": 26476544, 00:18:00.551 "uuid": "6ab47100-1d20-4323-99d1-b8bd233ed3a7", 00:18:00.551 "assigned_rate_limits": { 00:18:00.551 "rw_ios_per_sec": 0, 00:18:00.551 "rw_mbytes_per_sec": 0, 00:18:00.551 "r_mbytes_per_sec": 0, 00:18:00.551 "w_mbytes_per_sec": 0 00:18:00.551 }, 00:18:00.551 "claimed": false, 00:18:00.551 "zoned": false, 00:18:00.551 "supported_io_types": { 00:18:00.551 "read": true, 00:18:00.551 "write": true, 00:18:00.551 "unmap": true, 00:18:00.551 "flush": false, 00:18:00.551 "reset": true, 00:18:00.551 "nvme_admin": false, 00:18:00.551 "nvme_io": false, 00:18:00.551 "nvme_io_md": false, 00:18:00.551 "write_zeroes": true, 00:18:00.551 "zcopy": false, 00:18:00.551 "get_zone_info": false, 00:18:00.551 "zone_management": false, 00:18:00.551 "zone_append": false, 00:18:00.551 "compare": false, 00:18:00.551 "compare_and_write": false, 00:18:00.551 "abort": false, 00:18:00.551 "seek_hole": true, 00:18:00.551 "seek_data": true, 00:18:00.551 "copy": false, 00:18:00.551 "nvme_iov_md": false 00:18:00.551 }, 00:18:00.551 "driver_specific": { 00:18:00.551 "lvol": { 00:18:00.551 "lvol_store_uuid": "efd7472e-a6da-49e6-aeae-5788363a0b0d", 00:18:00.551 "base_bdev": "nvme0n1", 00:18:00.551 "thin_provision": true, 00:18:00.551 "num_allocated_clusters": 0, 00:18:00.551 "snapshot": false, 00:18:00.551 "clone": false, 00:18:00.551 "esnap_clone": false 00:18:00.551 } 00:18:00.551 } 00:18:00.551 } 00:18:00.551 ]' 00:18:00.551 14:54:45 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:00.551 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:00.551 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:00.551 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:00.551 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:00.551 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:00.551 14:54:46 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:00.551 14:54:46 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:00.551 14:54:46 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:00.812 14:54:46 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:00.812 14:54:46 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:00.812 14:54:46 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:00.812 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:00.812 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:00.812 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:00.812 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:00.812 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:01.074 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:01.074 { 00:18:01.074 "name": "6ab47100-1d20-4323-99d1-b8bd233ed3a7", 00:18:01.074 "aliases": [ 00:18:01.074 "lvs/nvme0n1p0" 00:18:01.074 ], 00:18:01.074 "product_name": "Logical Volume", 00:18:01.074 "block_size": 4096, 00:18:01.074 "num_blocks": 26476544, 00:18:01.074 "uuid": "6ab47100-1d20-4323-99d1-b8bd233ed3a7", 00:18:01.074 "assigned_rate_limits": { 00:18:01.074 "rw_ios_per_sec": 0, 00:18:01.074 "rw_mbytes_per_sec": 0, 00:18:01.074 "r_mbytes_per_sec": 0, 00:18:01.074 "w_mbytes_per_sec": 0 00:18:01.074 }, 00:18:01.074 "claimed": false, 00:18:01.074 "zoned": false, 00:18:01.074 "supported_io_types": { 00:18:01.074 "read": true, 00:18:01.074 "write": true, 00:18:01.074 "unmap": true, 00:18:01.074 "flush": false, 00:18:01.074 "reset": true, 00:18:01.074 "nvme_admin": false, 00:18:01.074 "nvme_io": false, 00:18:01.074 "nvme_io_md": false, 00:18:01.074 "write_zeroes": true, 00:18:01.074 "zcopy": false, 00:18:01.074 "get_zone_info": false, 00:18:01.074 "zone_management": false, 00:18:01.074 "zone_append": false, 00:18:01.074 "compare": false, 00:18:01.074 "compare_and_write": false, 00:18:01.074 "abort": false, 00:18:01.074 "seek_hole": true, 00:18:01.074 "seek_data": true, 00:18:01.074 "copy": false, 00:18:01.074 "nvme_iov_md": false 00:18:01.075 }, 00:18:01.075 "driver_specific": { 00:18:01.075 "lvol": { 00:18:01.075 "lvol_store_uuid": "efd7472e-a6da-49e6-aeae-5788363a0b0d", 00:18:01.075 "base_bdev": "nvme0n1", 00:18:01.075 "thin_provision": true, 00:18:01.075 "num_allocated_clusters": 0, 00:18:01.075 "snapshot": false, 00:18:01.075 "clone": false, 00:18:01.075 "esnap_clone": false 00:18:01.075 } 00:18:01.075 } 00:18:01.075 } 00:18:01.075 ]' 00:18:01.075 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
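(The get_bdev_size calls traced above reduce to reading block_size and num_blocks from bdev_get_bdevs and converting to MiB: size_mib = block_size * num_blocks / (1024 * 1024). With the values dumped here that gives 4096 * 1310720 / 1048576 = 5120 MiB for nvme0n1 and 4096 * 26476544 / 1048576 = 103424 MiB for the lvs/nvme0n1p0 volume, matching the echoed 5120 and 103424. A rough standalone sketch, assuming a running SPDK target reachable through scripts/rpc.py and jq on the PATH — not the exact autotest_common.sh implementation:

  get_bdev_size_mib() {
      local name=$1 bs nb
      bs=$(scripts/rpc.py bdev_get_bdevs -b "$name" | jq '.[] .block_size')
      nb=$(scripts/rpc.py bdev_get_bdevs -b "$name" | jq '.[] .num_blocks')
      echo $(( bs * nb / 1024 / 1024 ))
  }
  get_bdev_size_mib nvme0n1    # prints 5120 for the 1310720-block, 4 KiB QEMU namespace above
)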
00:18:01.075 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:01.075 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:01.075 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:01.075 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:01.075 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:01.075 14:54:46 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:01.075 14:54:46 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:01.336 14:54:46 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:01.336 14:54:46 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:01.336 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:01.336 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:01.336 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:01.336 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:01.336 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ab47100-1d20-4323-99d1-b8bd233ed3a7 00:18:01.599 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:01.599 { 00:18:01.599 "name": "6ab47100-1d20-4323-99d1-b8bd233ed3a7", 00:18:01.599 "aliases": [ 00:18:01.599 "lvs/nvme0n1p0" 00:18:01.599 ], 00:18:01.599 "product_name": "Logical Volume", 00:18:01.599 "block_size": 4096, 00:18:01.599 "num_blocks": 26476544, 00:18:01.599 "uuid": "6ab47100-1d20-4323-99d1-b8bd233ed3a7", 00:18:01.599 "assigned_rate_limits": { 00:18:01.599 "rw_ios_per_sec": 0, 00:18:01.599 "rw_mbytes_per_sec": 0, 00:18:01.599 "r_mbytes_per_sec": 0, 00:18:01.599 "w_mbytes_per_sec": 0 00:18:01.599 }, 00:18:01.599 "claimed": false, 00:18:01.599 "zoned": false, 00:18:01.599 "supported_io_types": { 00:18:01.599 "read": true, 00:18:01.599 "write": true, 00:18:01.599 "unmap": true, 00:18:01.599 "flush": false, 00:18:01.599 "reset": true, 00:18:01.599 "nvme_admin": false, 00:18:01.599 "nvme_io": false, 00:18:01.599 "nvme_io_md": false, 00:18:01.599 "write_zeroes": true, 00:18:01.599 "zcopy": false, 00:18:01.599 "get_zone_info": false, 00:18:01.599 "zone_management": false, 00:18:01.599 "zone_append": false, 00:18:01.599 "compare": false, 00:18:01.599 "compare_and_write": false, 00:18:01.599 "abort": false, 00:18:01.599 "seek_hole": true, 00:18:01.599 "seek_data": true, 00:18:01.599 "copy": false, 00:18:01.599 "nvme_iov_md": false 00:18:01.599 }, 00:18:01.599 "driver_specific": { 00:18:01.599 "lvol": { 00:18:01.599 "lvol_store_uuid": "efd7472e-a6da-49e6-aeae-5788363a0b0d", 00:18:01.599 "base_bdev": "nvme0n1", 00:18:01.599 "thin_provision": true, 00:18:01.599 "num_allocated_clusters": 0, 00:18:01.599 "snapshot": false, 00:18:01.599 "clone": false, 00:18:01.599 "esnap_clone": false 00:18:01.599 } 00:18:01.599 } 00:18:01.599 } 00:18:01.599 ]' 00:18:01.599 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:01.599 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:01.599 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:01.599 14:54:46 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:18:01.599 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:01.599 14:54:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:01.599 14:54:46 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:01.599 14:54:46 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6ab47100-1d20-4323-99d1-b8bd233ed3a7 --l2p_dram_limit 10' 00:18:01.599 14:54:46 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:01.599 14:54:46 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:01.599 14:54:46 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:01.599 14:54:46 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:01.599 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:01.599 14:54:46 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6ab47100-1d20-4323-99d1-b8bd233ed3a7 --l2p_dram_limit 10 -c nvc0n1p0 00:18:01.599 [2024-11-17 14:54:47.124687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.599 [2024-11-17 14:54:47.124810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:01.599 [2024-11-17 14:54:47.124830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:01.599 [2024-11-17 14:54:47.124838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.599 [2024-11-17 14:54:47.124888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.599 [2024-11-17 14:54:47.124896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:01.599 [2024-11-17 14:54:47.124904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:01.599 [2024-11-17 14:54:47.124909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.599 [2024-11-17 14:54:47.124942] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:01.599 [2024-11-17 14:54:47.125542] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:01.599 [2024-11-17 14:54:47.125564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.599 [2024-11-17 14:54:47.125571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:01.599 [2024-11-17 14:54:47.125579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:18:01.599 [2024-11-17 14:54:47.125585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.599 [2024-11-17 14:54:47.125635] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c8ea8922-59e6-447b-9520-64154f0daa7c 00:18:01.599 [2024-11-17 14:54:47.126552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.599 [2024-11-17 14:54:47.126581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:01.599 [2024-11-17 14:54:47.126589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:01.599 [2024-11-17 14:54:47.126597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.599 [2024-11-17 14:54:47.131228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.599 [2024-11-17 
14:54:47.131328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:01.599 [2024-11-17 14:54:47.131342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:18:01.599 [2024-11-17 14:54:47.131349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.599 [2024-11-17 14:54:47.131415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.599 [2024-11-17 14:54:47.131425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:01.599 [2024-11-17 14:54:47.131441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:01.599 [2024-11-17 14:54:47.131450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.599 [2024-11-17 14:54:47.131485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.599 [2024-11-17 14:54:47.131494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:01.599 [2024-11-17 14:54:47.131500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:01.599 [2024-11-17 14:54:47.131508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.599 [2024-11-17 14:54:47.131531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:01.599 [2024-11-17 14:54:47.134391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.599 [2024-11-17 14:54:47.134412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:01.599 [2024-11-17 14:54:47.134422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:18:01.599 [2024-11-17 14:54:47.134428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.599 [2024-11-17 14:54:47.134453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.599 [2024-11-17 14:54:47.134459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:01.599 [2024-11-17 14:54:47.134466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:01.599 [2024-11-17 14:54:47.134472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.599 [2024-11-17 14:54:47.134485] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:01.599 [2024-11-17 14:54:47.134588] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:01.599 [2024-11-17 14:54:47.134600] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:01.599 [2024-11-17 14:54:47.134608] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:01.600 [2024-11-17 14:54:47.134617] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:01.600 [2024-11-17 14:54:47.134624] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:01.600 [2024-11-17 14:54:47.134631] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:01.600 [2024-11-17 14:54:47.134636] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:01.600 [2024-11-17 14:54:47.134645] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:01.600 [2024-11-17 14:54:47.134651] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:01.600 [2024-11-17 14:54:47.134658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.600 [2024-11-17 14:54:47.134663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:01.600 [2024-11-17 14:54:47.134670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:18:01.600 [2024-11-17 14:54:47.134680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.600 [2024-11-17 14:54:47.134746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.600 [2024-11-17 14:54:47.134752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:01.600 [2024-11-17 14:54:47.134758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:01.600 [2024-11-17 14:54:47.134765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.600 [2024-11-17 14:54:47.134843] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:01.600 [2024-11-17 14:54:47.134850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:01.600 [2024-11-17 14:54:47.134857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.600 [2024-11-17 14:54:47.134863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.600 [2024-11-17 14:54:47.134870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:01.600 [2024-11-17 14:54:47.134876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:01.600 [2024-11-17 14:54:47.134882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:01.600 [2024-11-17 14:54:47.134887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:01.600 [2024-11-17 14:54:47.134893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:01.600 [2024-11-17 14:54:47.134898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.600 [2024-11-17 14:54:47.134905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:01.600 [2024-11-17 14:54:47.134909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:01.600 [2024-11-17 14:54:47.134915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.600 [2024-11-17 14:54:47.135079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:01.600 [2024-11-17 14:54:47.135099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:01.600 [2024-11-17 14:54:47.135114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:01.600 [2024-11-17 14:54:47.135146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:01.600 [2024-11-17 14:54:47.135163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:01.600 [2024-11-17 14:54:47.135230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.600 [2024-11-17 14:54:47.135263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:01.600 
[2024-11-17 14:54:47.135277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.600 [2024-11-17 14:54:47.135306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:01.600 [2024-11-17 14:54:47.135321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.600 [2024-11-17 14:54:47.135374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:01.600 [2024-11-17 14:54:47.135391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.600 [2024-11-17 14:54:47.135445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:01.600 [2024-11-17 14:54:47.135478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.600 [2024-11-17 14:54:47.135509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:01.600 [2024-11-17 14:54:47.135531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:01.600 [2024-11-17 14:54:47.135547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.600 [2024-11-17 14:54:47.135561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:01.600 [2024-11-17 14:54:47.135577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:01.600 [2024-11-17 14:54:47.135591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:01.600 [2024-11-17 14:54:47.135652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:01.600 [2024-11-17 14:54:47.135668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135682] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:01.600 [2024-11-17 14:54:47.135697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:01.600 [2024-11-17 14:54:47.135711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.600 [2024-11-17 14:54:47.135729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.600 [2024-11-17 14:54:47.135745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:01.600 [2024-11-17 14:54:47.135761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:01.600 [2024-11-17 14:54:47.135801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:01.600 [2024-11-17 14:54:47.135820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:01.600 [2024-11-17 14:54:47.135834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:01.600 [2024-11-17 14:54:47.135849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:01.600 [2024-11-17 14:54:47.135866] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:01.600 [2024-11-17 
14:54:47.135892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.600 [2024-11-17 14:54:47.135957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:01.600 [2024-11-17 14:54:47.135983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:01.600 [2024-11-17 14:54:47.136006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:01.600 [2024-11-17 14:54:47.136029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:01.600 [2024-11-17 14:54:47.136078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:01.600 [2024-11-17 14:54:47.136104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:01.600 [2024-11-17 14:54:47.136126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:01.600 [2024-11-17 14:54:47.136149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:01.600 [2024-11-17 14:54:47.136171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:01.600 [2024-11-17 14:54:47.136222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:01.600 [2024-11-17 14:54:47.136328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:01.600 [2024-11-17 14:54:47.136353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:01.600 [2024-11-17 14:54:47.136375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:01.600 [2024-11-17 14:54:47.136399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:01.600 [2024-11-17 14:54:47.136421] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:01.600 [2024-11-17 14:54:47.136470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.600 [2024-11-17 14:54:47.136494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:01.600 [2024-11-17 14:54:47.136517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:01.600 [2024-11-17 14:54:47.136539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:01.600 [2024-11-17 14:54:47.136561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:01.600 [2024-11-17 14:54:47.136613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.600 [2024-11-17 14:54:47.136631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:01.600 [2024-11-17 14:54:47.136646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:18:01.600 [2024-11-17 14:54:47.136662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.600 [2024-11-17 14:54:47.136720] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:01.600 [2024-11-17 14:54:47.136787] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:05.827 [2024-11-17 14:54:50.633223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.633481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:05.827 [2024-11-17 14:54:50.633568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3496.487 ms 00:18:05.827 [2024-11-17 14:54:50.633598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.665882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.666120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.827 [2024-11-17 14:54:50.666246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.010 ms 00:18:05.827 [2024-11-17 14:54:50.666278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.666446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.666606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:05.827 [2024-11-17 14:54:50.666633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:05.827 [2024-11-17 14:54:50.666658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.702634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.702828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.827 [2024-11-17 14:54:50.702901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.908 ms 00:18:05.827 [2024-11-17 14:54:50.702958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.703012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.703041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.827 [2024-11-17 14:54:50.703062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:05.827 [2024-11-17 14:54:50.703136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.703770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.703851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.827 [2024-11-17 14:54:50.703956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:18:05.827 [2024-11-17 14:54:50.703985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 
[2024-11-17 14:54:50.704119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.704145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.827 [2024-11-17 14:54:50.704168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:05.827 [2024-11-17 14:54:50.704193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.721612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.721792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.827 [2024-11-17 14:54:50.721991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.386 ms 00:18:05.827 [2024-11-17 14:54:50.722027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.735222] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:05.827 [2024-11-17 14:54:50.739173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.739320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:05.827 [2024-11-17 14:54:50.739384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.019 ms 00:18:05.827 [2024-11-17 14:54:50.739408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.848331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.848590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:05.827 [2024-11-17 14:54:50.848669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.782 ms 00:18:05.827 [2024-11-17 14:54:50.848695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.848956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.849027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:05.827 [2024-11-17 14:54:50.849094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:18:05.827 [2024-11-17 14:54:50.849119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.875620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.875788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:05.827 [2024-11-17 14:54:50.875856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.422 ms 00:18:05.827 [2024-11-17 14:54:50.875868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.900703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.900752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:05.827 [2024-11-17 14:54:50.900768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.724 ms 00:18:05.827 [2024-11-17 14:54:50.900776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.901408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.901428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:05.827 
[2024-11-17 14:54:50.901441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:18:05.827 [2024-11-17 14:54:50.901449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:50.983823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:50.983874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:05.827 [2024-11-17 14:54:50.983894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.323 ms 00:18:05.827 [2024-11-17 14:54:50.983903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:51.011359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:51.011409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:05.827 [2024-11-17 14:54:51.011424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.339 ms 00:18:05.827 [2024-11-17 14:54:51.011432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:51.037320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:51.037365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:05.827 [2024-11-17 14:54:51.037381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.830 ms 00:18:05.827 [2024-11-17 14:54:51.037388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:51.064300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:51.064349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:05.827 [2024-11-17 14:54:51.064364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.855 ms 00:18:05.827 [2024-11-17 14:54:51.064371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:51.064427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:51.064437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:05.827 [2024-11-17 14:54:51.064451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:05.827 [2024-11-17 14:54:51.064459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-17 14:54:51.064554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-17 14:54:51.064565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:05.827 [2024-11-17 14:54:51.064578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:05.827 [2024-11-17 14:54:51.064588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.828 [2024-11-17 14:54:51.065791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3940.586 ms, result 0 00:18:05.828 { 00:18:05.828 "name": "ftl0", 00:18:05.828 "uuid": "c8ea8922-59e6-447b-9520-64154f0daa7c" 00:18:05.828 } 00:18:05.828 14:54:51 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:05.828 14:54:51 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:05.828 14:54:51 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:05.828 14:54:51 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:06.089 [2024-11-17 14:54:51.509162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.089 [2024-11-17 14:54:51.509229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:06.089 [2024-11-17 14:54:51.509245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:06.089 [2024-11-17 14:54:51.509263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.089 [2024-11-17 14:54:51.509288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:06.089 [2024-11-17 14:54:51.512442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.089 [2024-11-17 14:54:51.512635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:06.089 [2024-11-17 14:54:51.512663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.130 ms 00:18:06.089 [2024-11-17 14:54:51.512672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.089 [2024-11-17 14:54:51.512998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.089 [2024-11-17 14:54:51.513011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:06.089 [2024-11-17 14:54:51.513027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:18:06.089 [2024-11-17 14:54:51.513035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.089 [2024-11-17 14:54:51.516293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.089 [2024-11-17 14:54:51.516315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:06.089 [2024-11-17 14:54:51.516326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:18:06.089 [2024-11-17 14:54:51.516334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.089 [2024-11-17 14:54:51.522642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.089 [2024-11-17 14:54:51.522682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:06.089 [2024-11-17 14:54:51.522699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.282 ms 00:18:06.089 [2024-11-17 14:54:51.522707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.089 [2024-11-17 14:54:51.549717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.089 [2024-11-17 14:54:51.549770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:06.089 [2024-11-17 14:54:51.549787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.932 ms 00:18:06.089 [2024-11-17 14:54:51.549795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.089 [2024-11-17 14:54:51.567512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.090 [2024-11-17 14:54:51.567567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:06.090 [2024-11-17 14:54:51.567582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.655 ms 00:18:06.090 [2024-11-17 14:54:51.567590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.090 [2024-11-17 14:54:51.567766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.090 [2024-11-17 14:54:51.567778] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:06.090 [2024-11-17 14:54:51.567790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:18:06.090 [2024-11-17 14:54:51.567799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.090 [2024-11-17 14:54:51.593098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.090 [2024-11-17 14:54:51.593288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:06.090 [2024-11-17 14:54:51.593315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.275 ms 00:18:06.090 [2024-11-17 14:54:51.593324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.090 [2024-11-17 14:54:51.618728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.090 [2024-11-17 14:54:51.618772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:06.090 [2024-11-17 14:54:51.618786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.318 ms 00:18:06.090 [2024-11-17 14:54:51.618794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.351 [2024-11-17 14:54:51.643791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.351 [2024-11-17 14:54:51.643839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:06.351 [2024-11-17 14:54:51.643853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.939 ms 00:18:06.351 [2024-11-17 14:54:51.643860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.351 [2024-11-17 14:54:51.668371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.351 [2024-11-17 14:54:51.668416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:06.351 [2024-11-17 14:54:51.668430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.357 ms 00:18:06.351 [2024-11-17 14:54:51.668437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.351 [2024-11-17 14:54:51.668488] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:06.351 [2024-11-17 14:54:51.668503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668591] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:06.351 [2024-11-17 14:54:51.668755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 
[2024-11-17 14:54:51.668808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.668998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:06.352 [2024-11-17 14:54:51.669076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:06.352 [2024-11-17 14:54:51.669440] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:06.352 [2024-11-17 14:54:51.669453] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8ea8922-59e6-447b-9520-64154f0daa7c 00:18:06.352 [2024-11-17 14:54:51.669460] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:06.352 [2024-11-17 14:54:51.669471] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:06.352 [2024-11-17 14:54:51.669478] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:06.352 [2024-11-17 14:54:51.669492] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:06.352 [2024-11-17 14:54:51.669499] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:06.352 [2024-11-17 14:54:51.669509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:06.352 [2024-11-17 14:54:51.669516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:06.352 [2024-11-17 14:54:51.669524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:06.352 [2024-11-17 14:54:51.669531] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:06.352 [2024-11-17 14:54:51.669541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.352 [2024-11-17 14:54:51.669549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:06.352 [2024-11-17 14:54:51.669560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:18:06.352 [2024-11-17 14:54:51.669567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.683227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.352 [2024-11-17 14:54:51.683416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:06.352 [2024-11-17 14:54:51.683442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.611 ms 00:18:06.352 [2024-11-17 14:54:51.683450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.683896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.352 [2024-11-17 14:54:51.683910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:06.352 [2024-11-17 14:54:51.683951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:18:06.352 [2024-11-17 14:54:51.683963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.730603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.730778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.352 [2024-11-17 14:54:51.730804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.730814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.730898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.730908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.352 [2024-11-17 14:54:51.730939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.730953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.731052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.731064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.352 [2024-11-17 14:54:51.731075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.731084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.731109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.731119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.352 [2024-11-17 14:54:51.731130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.731139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.815367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.815424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.352 [2024-11-17 14:54:51.815440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:06.352 [2024-11-17 14:54:51.815448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.884184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.884237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.352 [2024-11-17 14:54:51.884252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.884264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.884375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.884386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.352 [2024-11-17 14:54:51.884397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.884404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.884458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.884468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.352 [2024-11-17 14:54:51.884479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.884487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.884592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.884603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.352 [2024-11-17 14:54:51.884614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.884622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.884658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.884668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:06.352 [2024-11-17 14:54:51.884678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.884687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.884733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.884745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.352 [2024-11-17 14:54:51.884755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.884764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.884814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.352 [2024-11-17 14:54:51.884824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.352 [2024-11-17 14:54:51.884835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.352 [2024-11-17 14:54:51.884843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.352 [2024-11-17 14:54:51.885029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.787 ms, result 0 00:18:06.352 true 00:18:06.614 14:54:51 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74365 
00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74365 ']' 00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74365 00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74365 00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.614 killing process with pid 74365 00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74365' 00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 74365 00:18:06.614 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 74365 00:18:13.210 14:54:58 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:17.423 262144+0 records in 00:18:17.423 262144+0 records out 00:18:17.423 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.42443 s, 243 MB/s 00:18:17.423 14:55:02 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:19.340 14:55:04 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:19.340 [2024-11-17 14:55:04.758286] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:19.340 [2024-11-17 14:55:04.758537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74601 ] 00:18:19.601 [2024-11-17 14:55:04.918157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.601 [2024-11-17 14:55:05.032382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.862 [2024-11-17 14:55:05.321309] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.862 [2024-11-17 14:55:05.321384] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:20.125 [2024-11-17 14:55:05.481591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.481655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:20.125 [2024-11-17 14:55:05.481676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:20.125 [2024-11-17 14:55:05.481685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.481741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.481752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:20.125 [2024-11-17 14:55:05.481764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:20.125 [2024-11-17 14:55:05.481772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.481793] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:18:20.125 [2024-11-17 14:55:05.482852] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:20.125 [2024-11-17 14:55:05.482914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.482946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:20.125 [2024-11-17 14:55:05.482957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:18:20.125 [2024-11-17 14:55:05.482965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.484677] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:20.125 [2024-11-17 14:55:05.498757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.498981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:20.125 [2024-11-17 14:55:05.499005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.083 ms 00:18:20.125 [2024-11-17 14:55:05.499015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.499090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.499100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:20.125 [2024-11-17 14:55:05.499110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:20.125 [2024-11-17 14:55:05.499117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.507146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.507187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:20.125 [2024-11-17 14:55:05.507197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.946 ms 00:18:20.125 [2024-11-17 14:55:05.507205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.507290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.507300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:20.125 [2024-11-17 14:55:05.507309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:20.125 [2024-11-17 14:55:05.507318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.507363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.507373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:20.125 [2024-11-17 14:55:05.507382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:20.125 [2024-11-17 14:55:05.507390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.507415] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:20.125 [2024-11-17 14:55:05.511468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.511508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:20.125 [2024-11-17 14:55:05.511519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.060 ms 00:18:20.125 [2024-11-17 14:55:05.511552] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.511588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.511596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:20.125 [2024-11-17 14:55:05.511606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:20.125 [2024-11-17 14:55:05.511614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.511666] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:20.125 [2024-11-17 14:55:05.511689] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:20.125 [2024-11-17 14:55:05.511726] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:20.125 [2024-11-17 14:55:05.511746] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:20.125 [2024-11-17 14:55:05.511852] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:20.125 [2024-11-17 14:55:05.511863] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:20.125 [2024-11-17 14:55:05.511874] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:20.125 [2024-11-17 14:55:05.511885] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:20.125 [2024-11-17 14:55:05.511894] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:20.125 [2024-11-17 14:55:05.511903] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:20.125 [2024-11-17 14:55:05.511910] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:20.125 [2024-11-17 14:55:05.511941] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:20.125 [2024-11-17 14:55:05.511950] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:20.125 [2024-11-17 14:55:05.511962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.511971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:20.125 [2024-11-17 14:55:05.511980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:20.125 [2024-11-17 14:55:05.511987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.512071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.125 [2024-11-17 14:55:05.512080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:20.125 [2024-11-17 14:55:05.512088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:20.125 [2024-11-17 14:55:05.512095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.125 [2024-11-17 14:55:05.512200] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:20.125 [2024-11-17 14:55:05.512214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:20.125 [2024-11-17 14:55:05.512223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:20.125 [2024-11-17 14:55:05.512231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.125 [2024-11-17 14:55:05.512240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:20.125 [2024-11-17 14:55:05.512247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:20.125 [2024-11-17 14:55:05.512254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:20.125 [2024-11-17 14:55:05.512261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:20.125 [2024-11-17 14:55:05.512268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:20.125 [2024-11-17 14:55:05.512275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:20.125 [2024-11-17 14:55:05.512283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:20.125 [2024-11-17 14:55:05.512289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:20.125 [2024-11-17 14:55:05.512296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:20.125 [2024-11-17 14:55:05.512303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:20.125 [2024-11-17 14:55:05.512311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:20.126 [2024-11-17 14:55:05.512325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:20.126 [2024-11-17 14:55:05.512339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:20.126 [2024-11-17 14:55:05.512345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:20.126 [2024-11-17 14:55:05.512359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:20.126 [2024-11-17 14:55:05.512372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:20.126 [2024-11-17 14:55:05.512379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:20.126 [2024-11-17 14:55:05.512392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:20.126 [2024-11-17 14:55:05.512399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:20.126 [2024-11-17 14:55:05.512413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:20.126 [2024-11-17 14:55:05.512419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:20.126 [2024-11-17 14:55:05.512432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:20.126 [2024-11-17 14:55:05.512439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:20.126 [2024-11-17 14:55:05.512451] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:20.126 [2024-11-17 14:55:05.512460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:20.126 [2024-11-17 14:55:05.512466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:20.126 [2024-11-17 14:55:05.512473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:20.126 [2024-11-17 14:55:05.512479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:20.126 [2024-11-17 14:55:05.512485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:20.126 [2024-11-17 14:55:05.512499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:20.126 [2024-11-17 14:55:05.512506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512512] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:20.126 [2024-11-17 14:55:05.512520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:20.126 [2024-11-17 14:55:05.512528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:20.126 [2024-11-17 14:55:05.512536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.126 [2024-11-17 14:55:05.512545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:20.126 [2024-11-17 14:55:05.512552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:20.126 [2024-11-17 14:55:05.512560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:20.126 [2024-11-17 14:55:05.512567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:20.126 [2024-11-17 14:55:05.512573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:20.126 [2024-11-17 14:55:05.512580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:20.126 [2024-11-17 14:55:05.512589] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:20.126 [2024-11-17 14:55:05.512598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:20.126 [2024-11-17 14:55:05.512607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:20.126 [2024-11-17 14:55:05.512614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:20.126 [2024-11-17 14:55:05.512622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:20.126 [2024-11-17 14:55:05.512629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:20.126 [2024-11-17 14:55:05.512636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:20.126 [2024-11-17 14:55:05.512643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:20.126 [2024-11-17 14:55:05.512650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:20.126 [2024-11-17 14:55:05.512658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:20.126 [2024-11-17 14:55:05.512665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:20.126 [2024-11-17 14:55:05.512673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:20.126 [2024-11-17 14:55:05.512680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:20.126 [2024-11-17 14:55:05.512687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:20.126 [2024-11-17 14:55:05.512694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:20.126 [2024-11-17 14:55:05.512702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:20.126 [2024-11-17 14:55:05.512709] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:20.126 [2024-11-17 14:55:05.512720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:20.126 [2024-11-17 14:55:05.512728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:20.126 [2024-11-17 14:55:05.512736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:20.126 [2024-11-17 14:55:05.512744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:20.126 [2024-11-17 14:55:05.512751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:20.126 [2024-11-17 14:55:05.512759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.126 [2024-11-17 14:55:05.512767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:20.126 [2024-11-17 14:55:05.512775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:18:20.126 [2024-11-17 14:55:05.512785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.126 [2024-11-17 14:55:05.544367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.126 [2024-11-17 14:55:05.544417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:20.126 [2024-11-17 14:55:05.544428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.537 ms 00:18:20.126 [2024-11-17 14:55:05.544436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.126 [2024-11-17 14:55:05.544531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.126 [2024-11-17 14:55:05.544540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:20.126 [2024-11-17 14:55:05.544549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.064 ms 00:18:20.126 [2024-11-17 14:55:05.544556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.126 [2024-11-17 14:55:05.591472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.126 [2024-11-17 14:55:05.591535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.126 [2024-11-17 14:55:05.591549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.859 ms 00:18:20.126 [2024-11-17 14:55:05.591557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.126 [2024-11-17 14:55:05.591607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.126 [2024-11-17 14:55:05.591617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.126 [2024-11-17 14:55:05.591627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:20.126 [2024-11-17 14:55:05.591638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.126 [2024-11-17 14:55:05.592270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.126 [2024-11-17 14:55:05.592301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.126 [2024-11-17 14:55:05.592312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:18:20.126 [2024-11-17 14:55:05.592321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.126 [2024-11-17 14:55:05.592475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.126 [2024-11-17 14:55:05.592494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.126 [2024-11-17 14:55:05.592503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:20.126 [2024-11-17 14:55:05.592517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.126 [2024-11-17 14:55:05.608176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.126 [2024-11-17 14:55:05.608218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:20.126 [2024-11-17 14:55:05.608234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.639 ms 00:18:20.126 [2024-11-17 14:55:05.608243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.126 [2024-11-17 14:55:05.622612] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:20.126 [2024-11-17 14:55:05.622663] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:20.126 [2024-11-17 14:55:05.622678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.126 [2024-11-17 14:55:05.622687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:20.126 [2024-11-17 14:55:05.622696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.328 ms 00:18:20.126 [2024-11-17 14:55:05.622703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.126 [2024-11-17 14:55:05.648355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.127 [2024-11-17 14:55:05.648404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:20.127 [2024-11-17 14:55:05.648423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.599 ms 00:18:20.127 [2024-11-17 14:55:05.648431] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.127 [2024-11-17 14:55:05.661225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.127 [2024-11-17 14:55:05.661280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:20.127 [2024-11-17 14:55:05.661292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.740 ms 00:18:20.127 [2024-11-17 14:55:05.661299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.674006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.674051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:20.388 [2024-11-17 14:55:05.674063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.658 ms 00:18:20.388 [2024-11-17 14:55:05.674070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.674718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.674741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:20.388 [2024-11-17 14:55:05.674752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:18:20.388 [2024-11-17 14:55:05.674760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.740096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.740339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:20.388 [2024-11-17 14:55:05.740364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.313 ms 00:18:20.388 [2024-11-17 14:55:05.740382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.751771] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:20.388 [2024-11-17 14:55:05.754725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.754771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:20.388 [2024-11-17 14:55:05.754784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.010 ms 00:18:20.388 [2024-11-17 14:55:05.754794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.754894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.754907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:20.388 [2024-11-17 14:55:05.754935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:20.388 [2024-11-17 14:55:05.754944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.755024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.755036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:20.388 [2024-11-17 14:55:05.755045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:20.388 [2024-11-17 14:55:05.755054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.755079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.755089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:20.388 [2024-11-17 14:55:05.755098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:20.388 [2024-11-17 14:55:05.755107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.755139] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:20.388 [2024-11-17 14:55:05.755151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.755162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:20.388 [2024-11-17 14:55:05.755171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:20.388 [2024-11-17 14:55:05.755179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.780778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.780968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:20.388 [2024-11-17 14:55:05.781033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.576 ms 00:18:20.388 [2024-11-17 14:55:05.781057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.781433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.388 [2024-11-17 14:55:05.781517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:20.388 [2024-11-17 14:55:05.781621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:20.388 [2024-11-17 14:55:05.781646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.388 [2024-11-17 14:55:05.783080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.981 ms, result 0 00:18:21.332  [2024-11-17T14:55:07.819Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-17T14:55:09.206Z] Copying: 33/1024 [MB] (17 MBps) [2024-11-17T14:55:10.149Z] Copying: 56/1024 [MB] (22 MBps) [2024-11-17T14:55:11.090Z] Copying: 75/1024 [MB] (18 MBps) [2024-11-17T14:55:12.077Z] Copying: 98/1024 [MB] (23 MBps) [2024-11-17T14:55:13.062Z] Copying: 117/1024 [MB] (18 MBps) [2024-11-17T14:55:14.004Z] Copying: 142/1024 [MB] (25 MBps) [2024-11-17T14:55:14.947Z] Copying: 157/1024 [MB] (14 MBps) [2024-11-17T14:55:15.890Z] Copying: 174/1024 [MB] (17 MBps) [2024-11-17T14:55:16.834Z] Copying: 190/1024 [MB] (15 MBps) [2024-11-17T14:55:18.224Z] Copying: 201/1024 [MB] (10 MBps) [2024-11-17T14:55:19.168Z] Copying: 232/1024 [MB] (31 MBps) [2024-11-17T14:55:20.111Z] Copying: 286/1024 [MB] (54 MBps) [2024-11-17T14:55:21.053Z] Copying: 316/1024 [MB] (29 MBps) [2024-11-17T14:55:21.997Z] Copying: 335/1024 [MB] (18 MBps) [2024-11-17T14:55:22.941Z] Copying: 352/1024 [MB] (17 MBps) [2024-11-17T14:55:23.884Z] Copying: 368/1024 [MB] (15 MBps) [2024-11-17T14:55:24.828Z] Copying: 387/1024 [MB] (19 MBps) [2024-11-17T14:55:26.207Z] Copying: 412/1024 [MB] (24 MBps) [2024-11-17T14:55:27.148Z] Copying: 429/1024 [MB] (16 MBps) [2024-11-17T14:55:28.086Z] Copying: 446/1024 [MB] (16 MBps) [2024-11-17T14:55:29.026Z] Copying: 458/1024 [MB] (12 MBps) [2024-11-17T14:55:29.965Z] Copying: 470/1024 [MB] (12 MBps) [2024-11-17T14:55:30.906Z] Copying: 481/1024 [MB] (10 MBps) [2024-11-17T14:55:31.845Z] Copying: 497/1024 [MB] (16 MBps) [2024-11-17T14:55:33.226Z] Copying: 517/1024 [MB] (19 MBps) [2024-11-17T14:55:33.798Z] Copying: 529/1024 [MB] (12 MBps) 
[2024-11-17T14:55:35.183Z] Copying: 542/1024 [MB] (12 MBps) [2024-11-17T14:55:36.122Z] Copying: 556/1024 [MB] (14 MBps) [2024-11-17T14:55:37.062Z] Copying: 575/1024 [MB] (19 MBps) [2024-11-17T14:55:38.001Z] Copying: 586/1024 [MB] (10 MBps) [2024-11-17T14:55:38.945Z] Copying: 597/1024 [MB] (10 MBps) [2024-11-17T14:55:39.885Z] Copying: 607/1024 [MB] (10 MBps) [2024-11-17T14:55:40.832Z] Copying: 617/1024 [MB] (10 MBps) [2024-11-17T14:55:41.812Z] Copying: 642528/1048576 [kB] (10176 kBps) [2024-11-17T14:55:43.199Z] Copying: 655/1024 [MB] (28 MBps) [2024-11-17T14:55:44.144Z] Copying: 710/1024 [MB] (54 MBps) [2024-11-17T14:55:45.088Z] Copying: 747/1024 [MB] (37 MBps) [2024-11-17T14:55:46.033Z] Copying: 759/1024 [MB] (11 MBps) [2024-11-17T14:55:46.977Z] Copying: 769/1024 [MB] (10 MBps) [2024-11-17T14:55:47.931Z] Copying: 783/1024 [MB] (14 MBps) [2024-11-17T14:55:48.877Z] Copying: 796/1024 [MB] (12 MBps) [2024-11-17T14:55:49.821Z] Copying: 807/1024 [MB] (10 MBps) [2024-11-17T14:55:51.208Z] Copying: 822/1024 [MB] (14 MBps) [2024-11-17T14:55:52.153Z] Copying: 832/1024 [MB] (10 MBps) [2024-11-17T14:55:53.098Z] Copying: 844/1024 [MB] (12 MBps) [2024-11-17T14:55:54.042Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-17T14:55:54.985Z] Copying: 874/1024 [MB] (19 MBps) [2024-11-17T14:55:55.927Z] Copying: 896/1024 [MB] (21 MBps) [2024-11-17T14:55:56.872Z] Copying: 916/1024 [MB] (19 MBps) [2024-11-17T14:55:57.816Z] Copying: 935/1024 [MB] (19 MBps) [2024-11-17T14:55:59.202Z] Copying: 947/1024 [MB] (11 MBps) [2024-11-17T14:56:00.144Z] Copying: 978/1024 [MB] (30 MBps) [2024-11-17T14:56:01.089Z] Copying: 999/1024 [MB] (20 MBps) [2024-11-17T14:56:01.662Z] Copying: 1011/1024 [MB] (12 MBps) [2024-11-17T14:56:01.662Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-17 14:56:01.593010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.119 [2024-11-17 14:56:01.593072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.119 [2024-11-17 14:56:01.593088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:16.119 [2024-11-17 14:56:01.593098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.119 [2024-11-17 14:56:01.593120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.119 [2024-11-17 14:56:01.596267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.119 [2024-11-17 14:56:01.596314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.119 [2024-11-17 14:56:01.596326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.130 ms 00:19:16.119 [2024-11-17 14:56:01.596335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.119 [2024-11-17 14:56:01.599039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.119 [2024-11-17 14:56:01.599218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.119 [2024-11-17 14:56:01.599240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:19:16.119 [2024-11-17 14:56:01.599248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.119 [2024-11-17 14:56:01.618907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.119 [2024-11-17 14:56:01.618970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:16.119 [2024-11-17 14:56:01.618982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 19.636 ms 00:19:16.119 [2024-11-17 14:56:01.618991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.119 [2024-11-17 14:56:01.625215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.119 [2024-11-17 14:56:01.625276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:16.119 [2024-11-17 14:56:01.625288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.181 ms 00:19:16.119 [2024-11-17 14:56:01.625296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.119 [2024-11-17 14:56:01.652687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.119 [2024-11-17 14:56:01.652738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:16.119 [2024-11-17 14:56:01.652752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.330 ms 00:19:16.119 [2024-11-17 14:56:01.652759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.382 [2024-11-17 14:56:01.668908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.382 [2024-11-17 14:56:01.668991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:16.382 [2024-11-17 14:56:01.669005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.098 ms 00:19:16.382 [2024-11-17 14:56:01.669014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.382 [2024-11-17 14:56:01.669161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.382 [2024-11-17 14:56:01.669173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:16.382 [2024-11-17 14:56:01.669191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:16.382 [2024-11-17 14:56:01.669199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.382 [2024-11-17 14:56:01.695505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.382 [2024-11-17 14:56:01.695563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:16.382 [2024-11-17 14:56:01.695577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.290 ms 00:19:16.382 [2024-11-17 14:56:01.695585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.382 [2024-11-17 14:56:01.721422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.382 [2024-11-17 14:56:01.721472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:16.382 [2024-11-17 14:56:01.721498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.785 ms 00:19:16.382 [2024-11-17 14:56:01.721504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.382 [2024-11-17 14:56:01.746862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.382 [2024-11-17 14:56:01.746913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:16.382 [2024-11-17 14:56:01.746942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.308 ms 00:19:16.382 [2024-11-17 14:56:01.746950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.382 [2024-11-17 14:56:01.772732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.382 [2024-11-17 14:56:01.772794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:16.382 
[2024-11-17 14:56:01.772806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.670 ms 00:19:16.382 [2024-11-17 14:56:01.772814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.382 [2024-11-17 14:56:01.772862] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:16.382 [2024-11-17 14:56:01.772877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:16.382 [2024-11-17 14:56:01.772888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.772996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773263] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 
14:56:01.773465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:16.383 [2024-11-17 14:56:01.773600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:16.384 [2024-11-17 14:56:01.773607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:16.384 [2024-11-17 14:56:01.773615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:16.384 [2024-11-17 14:56:01.773623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:16.384 [2024-11-17 14:56:01.773631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:16.384 [2024-11-17 14:56:01.773639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:16.384 [2024-11-17 14:56:01.773647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:19:16.384 [2024-11-17 14:56:01.773654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:16.384 [2024-11-17 14:56:01.773662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:16.384 [2024-11-17 14:56:01.773669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:16.384 [2024-11-17 14:56:01.773685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:16.384 [2024-11-17 14:56:01.773701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8ea8922-59e6-447b-9520-64154f0daa7c 00:19:16.384 [2024-11-17 14:56:01.773710] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:16.384 [2024-11-17 14:56:01.773723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:16.384 [2024-11-17 14:56:01.773731] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:16.384 [2024-11-17 14:56:01.773739] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:16.384 [2024-11-17 14:56:01.773746] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:16.384 [2024-11-17 14:56:01.773754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:16.384 [2024-11-17 14:56:01.773762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:16.384 [2024-11-17 14:56:01.773777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:16.384 [2024-11-17 14:56:01.773783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:16.384 [2024-11-17 14:56:01.773790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.384 [2024-11-17 14:56:01.773798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:16.384 [2024-11-17 14:56:01.773807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:19:16.384 [2024-11-17 14:56:01.773814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.384 [2024-11-17 14:56:01.787402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.384 [2024-11-17 14:56:01.787450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:16.384 [2024-11-17 14:56:01.787463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.553 ms 00:19:16.384 [2024-11-17 14:56:01.787471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.384 [2024-11-17 14:56:01.787885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.384 [2024-11-17 14:56:01.787895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:16.384 [2024-11-17 14:56:01.787904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:19:16.384 [2024-11-17 14:56:01.787912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.384 [2024-11-17 14:56:01.824896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.384 [2024-11-17 14:56:01.824963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.384 [2024-11-17 14:56:01.824975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.384 [2024-11-17 14:56:01.824983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.384 [2024-11-17 14:56:01.825048] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.384 [2024-11-17 14:56:01.825057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.384 [2024-11-17 14:56:01.825066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.384 [2024-11-17 14:56:01.825075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.384 [2024-11-17 14:56:01.825170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.384 [2024-11-17 14:56:01.825182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.384 [2024-11-17 14:56:01.825190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.384 [2024-11-17 14:56:01.825198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.384 [2024-11-17 14:56:01.825215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.384 [2024-11-17 14:56:01.825223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.384 [2024-11-17 14:56:01.825231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.384 [2024-11-17 14:56:01.825239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.384 [2024-11-17 14:56:01.910961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.384 [2024-11-17 14:56:01.911022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.384 [2024-11-17 14:56:01.911038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.384 [2024-11-17 14:56:01.911047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.645 [2024-11-17 14:56:01.981462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.646 [2024-11-17 14:56:01.981524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:16.646 [2024-11-17 14:56:01.981538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.646 [2024-11-17 14:56:01.981547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.646 [2024-11-17 14:56:01.981618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.646 [2024-11-17 14:56:01.981635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:16.646 [2024-11-17 14:56:01.981644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.646 [2024-11-17 14:56:01.981653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.646 [2024-11-17 14:56:01.981711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.646 [2024-11-17 14:56:01.981722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:16.646 [2024-11-17 14:56:01.981731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.646 [2024-11-17 14:56:01.981739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.646 [2024-11-17 14:56:01.981843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.646 [2024-11-17 14:56:01.981858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:16.646 [2024-11-17 14:56:01.981866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.646 [2024-11-17 14:56:01.981875] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.646 [2024-11-17 14:56:01.981907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.646 [2024-11-17 14:56:01.981949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:16.646 [2024-11-17 14:56:01.981959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.646 [2024-11-17 14:56:01.981968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.646 [2024-11-17 14:56:01.982010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.646 [2024-11-17 14:56:01.982020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:16.646 [2024-11-17 14:56:01.982031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.646 [2024-11-17 14:56:01.982040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.646 [2024-11-17 14:56:01.982099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.646 [2024-11-17 14:56:01.982111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:16.646 [2024-11-17 14:56:01.982120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.646 [2024-11-17 14:56:01.982129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.646 [2024-11-17 14:56:01.982269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.220 ms, result 0 00:19:17.218 00:19:17.218 00:19:17.480 14:56:02 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:17.480 [2024-11-17 14:56:02.834192] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:19:17.480 [2024-11-17 14:56:02.834334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75195 ] 00:19:17.480 [2024-11-17 14:56:02.998178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.741 [2024-11-17 14:56:03.124339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.002 [2024-11-17 14:56:03.413202] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.002 [2024-11-17 14:56:03.413284] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.266 [2024-11-17 14:56:03.574719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.575013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:18.266 [2024-11-17 14:56:03.575047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:18.266 [2024-11-17 14:56:03.575057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.575133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.575144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.266 [2024-11-17 14:56:03.575156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:18.266 [2024-11-17 14:56:03.575164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.575187] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:18.266 [2024-11-17 14:56:03.575962] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:18.266 [2024-11-17 14:56:03.575983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.575993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.266 [2024-11-17 14:56:03.576002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:19:18.266 [2024-11-17 14:56:03.576010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.577721] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:18.266 [2024-11-17 14:56:03.592245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.592299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:18.266 [2024-11-17 14:56:03.592313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.527 ms 00:19:18.266 [2024-11-17 14:56:03.592322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.592413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.592423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:18.266 [2024-11-17 14:56:03.592432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:18.266 [2024-11-17 14:56:03.592440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.600884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:18.266 [2024-11-17 14:56:03.601119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.266 [2024-11-17 14:56:03.601139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.362 ms 00:19:18.266 [2024-11-17 14:56:03.601148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.601241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.601250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.266 [2024-11-17 14:56:03.601259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:18.266 [2024-11-17 14:56:03.601268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.601315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.601326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:18.266 [2024-11-17 14:56:03.601335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:18.266 [2024-11-17 14:56:03.601342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.601366] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:18.266 [2024-11-17 14:56:03.605406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.605446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:18.266 [2024-11-17 14:56:03.605457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.046 ms 00:19:18.266 [2024-11-17 14:56:03.605468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.605505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.605514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:18.266 [2024-11-17 14:56:03.605523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:18.266 [2024-11-17 14:56:03.605531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.605586] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:18.266 [2024-11-17 14:56:03.605610] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:18.266 [2024-11-17 14:56:03.605648] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:18.266 [2024-11-17 14:56:03.605667] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:18.266 [2024-11-17 14:56:03.605773] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:18.266 [2024-11-17 14:56:03.605785] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:18.266 [2024-11-17 14:56:03.605797] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:18.266 [2024-11-17 14:56:03.605808] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:18.266 [2024-11-17 14:56:03.605816] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:18.266 [2024-11-17 14:56:03.605825] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:18.266 [2024-11-17 14:56:03.605833] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:18.266 [2024-11-17 14:56:03.605842] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:18.266 [2024-11-17 14:56:03.605849] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:18.266 [2024-11-17 14:56:03.605861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.605869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:18.266 [2024-11-17 14:56:03.605877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:19:18.266 [2024-11-17 14:56:03.605885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.266 [2024-11-17 14:56:03.605992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.266 [2024-11-17 14:56:03.606002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:18.267 [2024-11-17 14:56:03.606011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:18.267 [2024-11-17 14:56:03.606018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.267 [2024-11-17 14:56:03.606124] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:18.267 [2024-11-17 14:56:03.606137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:18.267 [2024-11-17 14:56:03.606147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.267 [2024-11-17 14:56:03.606154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:18.267 [2024-11-17 14:56:03.606169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:18.267 [2024-11-17 14:56:03.606185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:18.267 [2024-11-17 14:56:03.606193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.267 [2024-11-17 14:56:03.606207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:18.267 [2024-11-17 14:56:03.606214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:18.267 [2024-11-17 14:56:03.606221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.267 [2024-11-17 14:56:03.606228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:18.267 [2024-11-17 14:56:03.606235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:18.267 [2024-11-17 14:56:03.606252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:18.267 [2024-11-17 14:56:03.606267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:18.267 [2024-11-17 14:56:03.606274] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:18.267 [2024-11-17 14:56:03.606288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.267 [2024-11-17 14:56:03.606302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:18.267 [2024-11-17 14:56:03.606309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.267 [2024-11-17 14:56:03.606323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:18.267 [2024-11-17 14:56:03.606330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.267 [2024-11-17 14:56:03.606343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:18.267 [2024-11-17 14:56:03.606350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.267 [2024-11-17 14:56:03.606363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:18.267 [2024-11-17 14:56:03.606370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.267 [2024-11-17 14:56:03.606383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:18.267 [2024-11-17 14:56:03.606390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:18.267 [2024-11-17 14:56:03.606396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.267 [2024-11-17 14:56:03.606402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:18.267 [2024-11-17 14:56:03.606408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:18.267 [2024-11-17 14:56:03.606414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:18.267 [2024-11-17 14:56:03.606428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:18.267 [2024-11-17 14:56:03.606435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606441] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:18.267 [2024-11-17 14:56:03.606449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:18.267 [2024-11-17 14:56:03.606456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.267 [2024-11-17 14:56:03.606463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.267 [2024-11-17 14:56:03.606474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:18.267 [2024-11-17 14:56:03.606481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:18.267 [2024-11-17 14:56:03.606487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:18.267 
[2024-11-17 14:56:03.606494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:18.267 [2024-11-17 14:56:03.606501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:18.267 [2024-11-17 14:56:03.606507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:18.267 [2024-11-17 14:56:03.606515] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:18.267 [2024-11-17 14:56:03.606525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.267 [2024-11-17 14:56:03.606534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:18.267 [2024-11-17 14:56:03.606541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:18.267 [2024-11-17 14:56:03.606548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:18.267 [2024-11-17 14:56:03.606555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:18.267 [2024-11-17 14:56:03.606564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:18.267 [2024-11-17 14:56:03.606571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:18.267 [2024-11-17 14:56:03.606578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:18.267 [2024-11-17 14:56:03.606584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:18.267 [2024-11-17 14:56:03.606591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:18.267 [2024-11-17 14:56:03.606600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:18.267 [2024-11-17 14:56:03.606607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:18.267 [2024-11-17 14:56:03.606614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:18.267 [2024-11-17 14:56:03.606621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:18.267 [2024-11-17 14:56:03.606629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:18.267 [2024-11-17 14:56:03.606637] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:18.267 [2024-11-17 14:56:03.606647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.267 [2024-11-17 14:56:03.606655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:18.267 [2024-11-17 14:56:03.606663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:18.267 [2024-11-17 14:56:03.606671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:18.267 [2024-11-17 14:56:03.606678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:18.267 [2024-11-17 14:56:03.606685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.267 [2024-11-17 14:56:03.606692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:18.267 [2024-11-17 14:56:03.606700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:19:18.267 [2024-11-17 14:56:03.606707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.267 [2024-11-17 14:56:03.639129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.267 [2024-11-17 14:56:03.639178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.267 [2024-11-17 14:56:03.639191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.375 ms 00:19:18.267 [2024-11-17 14:56:03.639199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.267 [2024-11-17 14:56:03.639296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.267 [2024-11-17 14:56:03.639305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:18.267 [2024-11-17 14:56:03.639314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:18.267 [2024-11-17 14:56:03.639322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.267 [2024-11-17 14:56:03.682378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.267 [2024-11-17 14:56:03.682435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.267 [2024-11-17 14:56:03.682449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.996 ms 00:19:18.267 [2024-11-17 14:56:03.682459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.267 [2024-11-17 14:56:03.682509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.267 [2024-11-17 14:56:03.682519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.267 [2024-11-17 14:56:03.682528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:18.267 [2024-11-17 14:56:03.682546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.267 [2024-11-17 14:56:03.683179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.268 [2024-11-17 14:56:03.683205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.268 [2024-11-17 14:56:03.683217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:19:18.268 [2024-11-17 14:56:03.683225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.268 [2024-11-17 14:56:03.683387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.268 [2024-11-17 14:56:03.683398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.268 [2024-11-17 14:56:03.683407] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:18.268 [2024-11-17 14:56:03.683418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.268 [2024-11-17 14:56:03.699287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.268 [2024-11-17 14:56:03.699334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.268 [2024-11-17 14:56:03.699348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.849 ms 00:19:18.268 [2024-11-17 14:56:03.699357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.268 [2024-11-17 14:56:03.714017] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:18.268 [2024-11-17 14:56:03.714069] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:18.268 [2024-11-17 14:56:03.714083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.268 [2024-11-17 14:56:03.714092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:18.268 [2024-11-17 14:56:03.714102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.616 ms 00:19:18.268 [2024-11-17 14:56:03.714109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.268 [2024-11-17 14:56:03.740241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.268 [2024-11-17 14:56:03.740474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:18.268 [2024-11-17 14:56:03.740499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.072 ms 00:19:18.268 [2024-11-17 14:56:03.740508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.268 [2024-11-17 14:56:03.753578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.268 [2024-11-17 14:56:03.753628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:18.268 [2024-11-17 14:56:03.753641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.924 ms 00:19:18.268 [2024-11-17 14:56:03.753648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.268 [2024-11-17 14:56:03.766697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.268 [2024-11-17 14:56:03.766746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:18.268 [2024-11-17 14:56:03.766759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.998 ms 00:19:18.268 [2024-11-17 14:56:03.766766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.268 [2024-11-17 14:56:03.767464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.268 [2024-11-17 14:56:03.767491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:18.268 [2024-11-17 14:56:03.767503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:19:18.268 [2024-11-17 14:56:03.767514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.529 [2024-11-17 14:56:03.833566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.529 [2024-11-17 14:56:03.833631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:18.529 [2024-11-17 14:56:03.833654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.031 ms 00:19:18.529 [2024-11-17 14:56:03.833663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.529 [2024-11-17 14:56:03.845082] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:18.529 [2024-11-17 14:56:03.848193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.529 [2024-11-17 14:56:03.848240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:18.529 [2024-11-17 14:56:03.848252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.471 ms 00:19:18.529 [2024-11-17 14:56:03.848261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.529 [2024-11-17 14:56:03.848351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.529 [2024-11-17 14:56:03.848363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:18.529 [2024-11-17 14:56:03.848373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:18.529 [2024-11-17 14:56:03.848384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.529 [2024-11-17 14:56:03.848457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.529 [2024-11-17 14:56:03.848467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:18.529 [2024-11-17 14:56:03.848477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:18.529 [2024-11-17 14:56:03.848486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.529 [2024-11-17 14:56:03.848506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.529 [2024-11-17 14:56:03.848515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:18.529 [2024-11-17 14:56:03.848524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:18.529 [2024-11-17 14:56:03.848533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.529 [2024-11-17 14:56:03.848568] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:18.529 [2024-11-17 14:56:03.848582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.529 [2024-11-17 14:56:03.848591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:18.529 [2024-11-17 14:56:03.848600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:18.529 [2024-11-17 14:56:03.848608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.529 [2024-11-17 14:56:03.874916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.529 [2024-11-17 14:56:03.874981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:18.529 [2024-11-17 14:56:03.874996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.289 ms 00:19:18.529 [2024-11-17 14:56:03.875010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.529 [2024-11-17 14:56:03.875104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.529 [2024-11-17 14:56:03.875115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:18.529 [2024-11-17 14:56:03.875124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:18.529 [2024-11-17 14:56:03.875132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:18.529 [2024-11-17 14:56:03.876388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.159 ms, result 0 00:19:19.916  [2024-11-17T14:56:06.403Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-17T14:56:07.345Z] Copying: 43/1024 [MB] (21 MBps) [2024-11-17T14:56:08.290Z] Copying: 60/1024 [MB] (17 MBps) [2024-11-17T14:56:09.257Z] Copying: 73/1024 [MB] (12 MBps) [2024-11-17T14:56:10.205Z] Copying: 92/1024 [MB] (19 MBps) [2024-11-17T14:56:11.146Z] Copying: 103/1024 [MB] (10 MBps) [2024-11-17T14:56:12.087Z] Copying: 113/1024 [MB] (10 MBps) [2024-11-17T14:56:13.470Z] Copying: 124/1024 [MB] (10 MBps) [2024-11-17T14:56:14.412Z] Copying: 134/1024 [MB] (10 MBps) [2024-11-17T14:56:15.352Z] Copying: 145/1024 [MB] (10 MBps) [2024-11-17T14:56:16.294Z] Copying: 155/1024 [MB] (10 MBps) [2024-11-17T14:56:17.235Z] Copying: 166/1024 [MB] (10 MBps) [2024-11-17T14:56:18.178Z] Copying: 178/1024 [MB] (11 MBps) [2024-11-17T14:56:19.119Z] Copying: 188/1024 [MB] (10 MBps) [2024-11-17T14:56:20.500Z] Copying: 199/1024 [MB] (10 MBps) [2024-11-17T14:56:21.072Z] Copying: 213/1024 [MB] (13 MBps) [2024-11-17T14:56:22.460Z] Copying: 224/1024 [MB] (11 MBps) [2024-11-17T14:56:23.405Z] Copying: 238/1024 [MB] (14 MBps) [2024-11-17T14:56:24.350Z] Copying: 249/1024 [MB] (11 MBps) [2024-11-17T14:56:25.294Z] Copying: 263/1024 [MB] (13 MBps) [2024-11-17T14:56:26.239Z] Copying: 275/1024 [MB] (12 MBps) [2024-11-17T14:56:27.181Z] Copying: 286/1024 [MB] (10 MBps) [2024-11-17T14:56:28.126Z] Copying: 298/1024 [MB] (11 MBps) [2024-11-17T14:56:29.069Z] Copying: 314/1024 [MB] (16 MBps) [2024-11-17T14:56:30.452Z] Copying: 331/1024 [MB] (16 MBps) [2024-11-17T14:56:31.397Z] Copying: 347/1024 [MB] (16 MBps) [2024-11-17T14:56:32.340Z] Copying: 361/1024 [MB] (14 MBps) [2024-11-17T14:56:33.285Z] Copying: 381/1024 [MB] (20 MBps) [2024-11-17T14:56:34.228Z] Copying: 394/1024 [MB] (12 MBps) [2024-11-17T14:56:35.172Z] Copying: 412/1024 [MB] (17 MBps) [2024-11-17T14:56:36.117Z] Copying: 429/1024 [MB] (16 MBps) [2024-11-17T14:56:37.506Z] Copying: 439/1024 [MB] (10 MBps) [2024-11-17T14:56:38.079Z] Copying: 455/1024 [MB] (15 MBps) [2024-11-17T14:56:39.489Z] Copying: 467/1024 [MB] (11 MBps) [2024-11-17T14:56:40.429Z] Copying: 480/1024 [MB] (13 MBps) [2024-11-17T14:56:41.374Z] Copying: 497/1024 [MB] (17 MBps) [2024-11-17T14:56:42.317Z] Copying: 509/1024 [MB] (11 MBps) [2024-11-17T14:56:43.256Z] Copying: 520/1024 [MB] (11 MBps) [2024-11-17T14:56:44.201Z] Copying: 532/1024 [MB] (11 MBps) [2024-11-17T14:56:45.143Z] Copying: 542/1024 [MB] (10 MBps) [2024-11-17T14:56:46.087Z] Copying: 553/1024 [MB] (10 MBps) [2024-11-17T14:56:47.475Z] Copying: 564/1024 [MB] (10 MBps) [2024-11-17T14:56:48.419Z] Copying: 594/1024 [MB] (30 MBps) [2024-11-17T14:56:49.361Z] Copying: 605/1024 [MB] (10 MBps) [2024-11-17T14:56:50.303Z] Copying: 615/1024 [MB] (10 MBps) [2024-11-17T14:56:51.246Z] Copying: 626/1024 [MB] (10 MBps) [2024-11-17T14:56:52.187Z] Copying: 650/1024 [MB] (23 MBps) [2024-11-17T14:56:53.129Z] Copying: 672/1024 [MB] (21 MBps) [2024-11-17T14:56:54.073Z] Copying: 688/1024 [MB] (16 MBps) [2024-11-17T14:56:55.460Z] Copying: 703/1024 [MB] (15 MBps) [2024-11-17T14:56:56.402Z] Copying: 723/1024 [MB] (20 MBps) [2024-11-17T14:56:57.345Z] Copying: 737/1024 [MB] (14 MBps) [2024-11-17T14:56:58.288Z] Copying: 749/1024 [MB] (11 MBps) [2024-11-17T14:56:59.231Z] Copying: 763/1024 [MB] (14 MBps) [2024-11-17T14:57:00.173Z] Copying: 780/1024 [MB] (16 MBps) [2024-11-17T14:57:01.117Z] Copying: 803/1024 [MB] (23 MBps) 
[2024-11-17T14:57:02.505Z] Copying: 826/1024 [MB] (23 MBps) [2024-11-17T14:57:03.076Z] Copying: 840/1024 [MB] (13 MBps) [2024-11-17T14:57:04.463Z] Copying: 858/1024 [MB] (18 MBps) [2024-11-17T14:57:05.409Z] Copying: 880/1024 [MB] (21 MBps) [2024-11-17T14:57:06.361Z] Copying: 898/1024 [MB] (18 MBps) [2024-11-17T14:57:07.350Z] Copying: 924/1024 [MB] (25 MBps) [2024-11-17T14:57:08.313Z] Copying: 946/1024 [MB] (21 MBps) [2024-11-17T14:57:09.258Z] Copying: 967/1024 [MB] (21 MBps) [2024-11-17T14:57:10.199Z] Copying: 986/1024 [MB] (18 MBps) [2024-11-17T14:57:11.143Z] Copying: 1001/1024 [MB] (15 MBps) [2024-11-17T14:57:12.086Z] Copying: 1011/1024 [MB] (10 MBps) [2024-11-17T14:57:12.348Z] Copying: 1022/1024 [MB] (10 MBps) [2024-11-17T14:57:12.348Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 14:57:12.230318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.805 [2024-11-17 14:57:12.230403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:26.805 [2024-11-17 14:57:12.230424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:26.805 [2024-11-17 14:57:12.230438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.806 [2024-11-17 14:57:12.230469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:26.806 [2024-11-17 14:57:12.237151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.806 [2024-11-17 14:57:12.237220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:26.806 [2024-11-17 14:57:12.237253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.657 ms 00:20:26.806 [2024-11-17 14:57:12.237271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.806 [2024-11-17 14:57:12.237742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.806 [2024-11-17 14:57:12.237775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:26.806 [2024-11-17 14:57:12.237795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:20:26.806 [2024-11-17 14:57:12.237812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.806 [2024-11-17 14:57:12.244830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.806 [2024-11-17 14:57:12.244860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:26.806 [2024-11-17 14:57:12.244870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.989 ms 00:20:26.806 [2024-11-17 14:57:12.244880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.806 [2024-11-17 14:57:12.251063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.806 [2024-11-17 14:57:12.251099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:26.806 [2024-11-17 14:57:12.251110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.162 ms 00:20:26.806 [2024-11-17 14:57:12.251127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.806 [2024-11-17 14:57:12.277544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.806 [2024-11-17 14:57:12.277596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:26.806 [2024-11-17 14:57:12.277608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.362 ms 00:20:26.806 [2024-11-17 
14:57:12.277616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.806 [2024-11-17 14:57:12.293465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.806 [2024-11-17 14:57:12.293528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:26.806 [2024-11-17 14:57:12.293541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.804 ms 00:20:26.806 [2024-11-17 14:57:12.293549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.806 [2024-11-17 14:57:12.293687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.806 [2024-11-17 14:57:12.293706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:26.806 [2024-11-17 14:57:12.293716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:26.806 [2024-11-17 14:57:12.293724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.806 [2024-11-17 14:57:12.319310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.806 [2024-11-17 14:57:12.319361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:26.806 [2024-11-17 14:57:12.319372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.571 ms 00:20:26.806 [2024-11-17 14:57:12.319379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.806 [2024-11-17 14:57:12.344281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.806 [2024-11-17 14:57:12.344338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:26.806 [2024-11-17 14:57:12.344349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.859 ms 00:20:26.806 [2024-11-17 14:57:12.344356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.068 [2024-11-17 14:57:12.368684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.068 [2024-11-17 14:57:12.368734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:27.068 [2024-11-17 14:57:12.368745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.286 ms 00:20:27.068 [2024-11-17 14:57:12.368752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.068 [2024-11-17 14:57:12.393029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.068 [2024-11-17 14:57:12.393071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:27.068 [2024-11-17 14:57:12.393083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.209 ms 00:20:27.068 [2024-11-17 14:57:12.393090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.068 [2024-11-17 14:57:12.393131] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:27.068 [2024-11-17 14:57:12.393146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393188] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:27.068 [2024-11-17 14:57:12.393372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 
14:57:12.393382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:20:27.069 [2024-11-17 14:57:12.393574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:27.069 [2024-11-17 14:57:12.393956] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:27.069 [2024-11-17 14:57:12.393968] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8ea8922-59e6-447b-9520-64154f0daa7c 00:20:27.069 [2024-11-17 14:57:12.393976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:27.069 [2024-11-17 14:57:12.393983] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:27.069 [2024-11-17 
14:57:12.393991] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:27.069 [2024-11-17 14:57:12.393999] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:27.069 [2024-11-17 14:57:12.394006] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:27.069 [2024-11-17 14:57:12.394014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:27.069 [2024-11-17 14:57:12.394030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:27.069 [2024-11-17 14:57:12.394037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:27.069 [2024-11-17 14:57:12.394044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:27.069 [2024-11-17 14:57:12.394051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.069 [2024-11-17 14:57:12.394059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:27.069 [2024-11-17 14:57:12.394068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:20:27.069 [2024-11-17 14:57:12.394076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.069 [2024-11-17 14:57:12.407668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.069 [2024-11-17 14:57:12.407711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:27.069 [2024-11-17 14:57:12.407724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.571 ms 00:20:27.069 [2024-11-17 14:57:12.407733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.069 [2024-11-17 14:57:12.408174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.069 [2024-11-17 14:57:12.408191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:27.069 [2024-11-17 14:57:12.408201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:20:27.070 [2024-11-17 14:57:12.408215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.444219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.444266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.070 [2024-11-17 14:57:12.444278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.444288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.444356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.444366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.070 [2024-11-17 14:57:12.444376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.444390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.444458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.444476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.070 [2024-11-17 14:57:12.444485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.444498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.444516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.444530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.070 [2024-11-17 14:57:12.444539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.444548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.527979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.528039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.070 [2024-11-17 14:57:12.528052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.528062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.596671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.596733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.070 [2024-11-17 14:57:12.596745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.596754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.596847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.596858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.070 [2024-11-17 14:57:12.596867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.596875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.596913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.596944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.070 [2024-11-17 14:57:12.596954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.596962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.597065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.597076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.070 [2024-11-17 14:57:12.597085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.597093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.597125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.597135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:27.070 [2024-11-17 14:57:12.597143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.597151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.597193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.597205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.070 [2024-11-17 14:57:12.597214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.597221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 
[2024-11-17 14:57:12.597267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.070 [2024-11-17 14:57:12.597278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.070 [2024-11-17 14:57:12.597286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.070 [2024-11-17 14:57:12.597294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.070 [2024-11-17 14:57:12.597427] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.081 ms, result 0 00:20:28.014 00:20:28.014 00:20:28.014 14:57:13 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:30.563 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:30.563 14:57:15 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:30.563 [2024-11-17 14:57:15.623217] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:30.563 [2024-11-17 14:57:15.623335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75948 ] 00:20:30.564 [2024-11-17 14:57:15.778997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.564 [2024-11-17 14:57:15.901548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.825 [2024-11-17 14:57:16.187012] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.825 [2024-11-17 14:57:16.187093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.825 [2024-11-17 14:57:16.347421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.825 [2024-11-17 14:57:16.347485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:30.825 [2024-11-17 14:57:16.347506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:30.825 [2024-11-17 14:57:16.347515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.825 [2024-11-17 14:57:16.347569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.825 [2024-11-17 14:57:16.347605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:30.825 [2024-11-17 14:57:16.347617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:30.825 [2024-11-17 14:57:16.347626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.825 [2024-11-17 14:57:16.347648] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:30.825 [2024-11-17 14:57:16.348389] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:30.825 [2024-11-17 14:57:16.348421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.825 [2024-11-17 14:57:16.348430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:30.825 [2024-11-17 14:57:16.348440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:20:30.825 [2024-11-17 14:57:16.348447] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.825 [2024-11-17 14:57:16.350062] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:30.825 [2024-11-17 14:57:16.364182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.825 [2024-11-17 14:57:16.364234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:30.825 [2024-11-17 14:57:16.364247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.122 ms 00:20:30.825 [2024-11-17 14:57:16.364256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.825 [2024-11-17 14:57:16.364332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.825 [2024-11-17 14:57:16.364342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:30.825 [2024-11-17 14:57:16.364351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:30.825 [2024-11-17 14:57:16.364359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.088 [2024-11-17 14:57:16.372185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.088 [2024-11-17 14:57:16.372231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.088 [2024-11-17 14:57:16.372241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.750 ms 00:20:31.088 [2024-11-17 14:57:16.372249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.088 [2024-11-17 14:57:16.372331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.088 [2024-11-17 14:57:16.372341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.088 [2024-11-17 14:57:16.372350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:31.088 [2024-11-17 14:57:16.372357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.088 [2024-11-17 14:57:16.372401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.088 [2024-11-17 14:57:16.372412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.088 [2024-11-17 14:57:16.372420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:31.088 [2024-11-17 14:57:16.372428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.088 [2024-11-17 14:57:16.372451] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.088 [2024-11-17 14:57:16.376487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.088 [2024-11-17 14:57:16.376528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.088 [2024-11-17 14:57:16.376538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.040 ms 00:20:31.088 [2024-11-17 14:57:16.376548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.088 [2024-11-17 14:57:16.376590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.088 [2024-11-17 14:57:16.376599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.088 [2024-11-17 14:57:16.376608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:31.088 [2024-11-17 14:57:16.376616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.088 [2024-11-17 14:57:16.376664] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:31.088 [2024-11-17 14:57:16.376688] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:31.088 [2024-11-17 14:57:16.376724] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:31.088 [2024-11-17 14:57:16.376744] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:31.088 [2024-11-17 14:57:16.376850] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:31.088 [2024-11-17 14:57:16.376861] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.088 [2024-11-17 14:57:16.376872] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:31.088 [2024-11-17 14:57:16.376884] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.088 [2024-11-17 14:57:16.376893] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.088 [2024-11-17 14:57:16.376902] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:31.088 [2024-11-17 14:57:16.376910] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.088 [2024-11-17 14:57:16.376932] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:31.088 [2024-11-17 14:57:16.376942] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:31.088 [2024-11-17 14:57:16.376953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.088 [2024-11-17 14:57:16.376961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.088 [2024-11-17 14:57:16.376970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:31.088 [2024-11-17 14:57:16.376977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.088 [2024-11-17 14:57:16.377060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.088 [2024-11-17 14:57:16.377070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.088 [2024-11-17 14:57:16.377078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:31.088 [2024-11-17 14:57:16.377085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.088 [2024-11-17 14:57:16.377188] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.088 [2024-11-17 14:57:16.377209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.088 [2024-11-17 14:57:16.377219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.088 [2024-11-17 14:57:16.377227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.088 [2024-11-17 14:57:16.377242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:31.088 [2024-11-17 14:57:16.377256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:20:31.088 [2024-11-17 14:57:16.377263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.088 [2024-11-17 14:57:16.377276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.088 [2024-11-17 14:57:16.377283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:31.088 [2024-11-17 14:57:16.377290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.088 [2024-11-17 14:57:16.377297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.088 [2024-11-17 14:57:16.377306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:31.088 [2024-11-17 14:57:16.377320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.088 [2024-11-17 14:57:16.377334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:31.088 [2024-11-17 14:57:16.377340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.088 [2024-11-17 14:57:16.377354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.088 [2024-11-17 14:57:16.377368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.088 [2024-11-17 14:57:16.377375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.088 [2024-11-17 14:57:16.377389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.088 [2024-11-17 14:57:16.377396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.088 [2024-11-17 14:57:16.377410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.088 [2024-11-17 14:57:16.377418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.088 [2024-11-17 14:57:16.377431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.088 [2024-11-17 14:57:16.377438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.088 [2024-11-17 14:57:16.377451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.088 [2024-11-17 14:57:16.377457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:31.088 [2024-11-17 14:57:16.377463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.088 [2024-11-17 14:57:16.377470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:31.088 [2024-11-17 14:57:16.377477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:31.088 [2024-11-17 14:57:16.377483] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:31.088 [2024-11-17 14:57:16.377495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:31.088 [2024-11-17 14:57:16.377501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.088 [2024-11-17 14:57:16.377507] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.089 [2024-11-17 14:57:16.377516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.089 [2024-11-17 14:57:16.377523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.089 [2024-11-17 14:57:16.377532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.089 [2024-11-17 14:57:16.377540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:31.089 [2024-11-17 14:57:16.377547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.089 [2024-11-17 14:57:16.377555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.089 [2024-11-17 14:57:16.377562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.089 [2024-11-17 14:57:16.377568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.089 [2024-11-17 14:57:16.377575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.089 [2024-11-17 14:57:16.377583] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.089 [2024-11-17 14:57:16.377591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.089 [2024-11-17 14:57:16.377600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:31.089 [2024-11-17 14:57:16.377608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:31.089 [2024-11-17 14:57:16.377615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:31.089 [2024-11-17 14:57:16.377622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:31.089 [2024-11-17 14:57:16.377629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:31.089 [2024-11-17 14:57:16.377636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:31.089 [2024-11-17 14:57:16.377644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:31.089 [2024-11-17 14:57:16.377651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:31.089 [2024-11-17 14:57:16.377658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:31.089 [2024-11-17 14:57:16.377665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:20:31.089 [2024-11-17 14:57:16.377673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:31.089 [2024-11-17 14:57:16.377680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:31.089 [2024-11-17 14:57:16.377687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:31.089 [2024-11-17 14:57:16.377695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:31.089 [2024-11-17 14:57:16.377702] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.089 [2024-11-17 14:57:16.377713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.089 [2024-11-17 14:57:16.377721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.089 [2024-11-17 14:57:16.377729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.089 [2024-11-17 14:57:16.377736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.089 [2024-11-17 14:57:16.377744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:31.089 [2024-11-17 14:57:16.377752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.377760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.089 [2024-11-17 14:57:16.377767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:20:31.089 [2024-11-17 14:57:16.377776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.409140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.409191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.089 [2024-11-17 14:57:16.409203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.320 ms 00:20:31.089 [2024-11-17 14:57:16.409211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.409304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.409312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.089 [2024-11-17 14:57:16.409322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:31.089 [2024-11-17 14:57:16.409330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.453981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.454049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.089 [2024-11-17 14:57:16.454063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.595 ms 00:20:31.089 [2024-11-17 14:57:16.454071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.454118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.454128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.089 [2024-11-17 14:57:16.454138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.089 [2024-11-17 14:57:16.454149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.454723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.454761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.089 [2024-11-17 14:57:16.454773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:20:31.089 [2024-11-17 14:57:16.454781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.454960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.454973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.089 [2024-11-17 14:57:16.454982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:20:31.089 [2024-11-17 14:57:16.454997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.470366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.470411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.089 [2024-11-17 14:57:16.470426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.349 ms 00:20:31.089 [2024-11-17 14:57:16.470434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.484477] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:31.089 [2024-11-17 14:57:16.484528] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:31.089 [2024-11-17 14:57:16.484541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.484550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:31.089 [2024-11-17 14:57:16.484559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.004 ms 00:20:31.089 [2024-11-17 14:57:16.484566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.509856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.509912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:31.089 [2024-11-17 14:57:16.509932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.239 ms 00:20:31.089 [2024-11-17 14:57:16.509941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.522715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.522763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:31.089 [2024-11-17 14:57:16.522774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.723 ms 00:20:31.089 [2024-11-17 14:57:16.522781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.535007] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.535054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:31.089 [2024-11-17 14:57:16.535066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.182 ms 00:20:31.089 [2024-11-17 14:57:16.535073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.535734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.535769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.089 [2024-11-17 14:57:16.535781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:20:31.089 [2024-11-17 14:57:16.535792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.598653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.598720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:31.089 [2024-11-17 14:57:16.598743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.841 ms 00:20:31.089 [2024-11-17 14:57:16.598752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.609818] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:31.089 [2024-11-17 14:57:16.612618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.612658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:31.089 [2024-11-17 14:57:16.612670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.810 ms 00:20:31.089 [2024-11-17 14:57:16.612679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.612757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.612768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:31.089 [2024-11-17 14:57:16.612778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:31.089 [2024-11-17 14:57:16.612789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.089 [2024-11-17 14:57:16.612858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.089 [2024-11-17 14:57:16.612870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:31.089 [2024-11-17 14:57:16.612879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:31.089 [2024-11-17 14:57:16.612886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.090 [2024-11-17 14:57:16.612907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.090 [2024-11-17 14:57:16.612915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:31.090 [2024-11-17 14:57:16.612942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:31.090 [2024-11-17 14:57:16.612950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.090 [2024-11-17 14:57:16.612985] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:31.090 [2024-11-17 14:57:16.612997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.090 [2024-11-17 14:57:16.613006] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:31.090 [2024-11-17 14:57:16.613014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:31.090 [2024-11-17 14:57:16.613023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.350 [2024-11-17 14:57:16.638222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.350 [2024-11-17 14:57:16.638273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:31.350 [2024-11-17 14:57:16.638286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.179 ms 00:20:31.350 [2024-11-17 14:57:16.638300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.350 [2024-11-17 14:57:16.638391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.350 [2024-11-17 14:57:16.638402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:31.350 [2024-11-17 14:57:16.638412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:31.350 [2024-11-17 14:57:16.638420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.351 [2024-11-17 14:57:16.639646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.740 ms, result 0 00:20:32.293  [2024-11-17T14:57:18.780Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-17T14:57:19.746Z] Copying: 33/1024 [MB] (18 MBps) [2024-11-17T14:57:20.690Z] Copying: 54/1024 [MB] (20 MBps) [2024-11-17T14:57:22.076Z] Copying: 68/1024 [MB] (14 MBps) [2024-11-17T14:57:23.020Z] Copying: 93/1024 [MB] (24 MBps) [2024-11-17T14:57:23.964Z] Copying: 137/1024 [MB] (44 MBps) [2024-11-17T14:57:24.906Z] Copying: 158/1024 [MB] (20 MBps) [2024-11-17T14:57:25.852Z] Copying: 172/1024 [MB] (14 MBps) [2024-11-17T14:57:26.796Z] Copying: 191/1024 [MB] (18 MBps) [2024-11-17T14:57:27.739Z] Copying: 225/1024 [MB] (34 MBps) [2024-11-17T14:57:28.683Z] Copying: 275/1024 [MB] (49 MBps) [2024-11-17T14:57:30.074Z] Copying: 295/1024 [MB] (20 MBps) [2024-11-17T14:57:31.018Z] Copying: 314/1024 [MB] (19 MBps) [2024-11-17T14:57:31.962Z] Copying: 360/1024 [MB] (45 MBps) [2024-11-17T14:57:32.908Z] Copying: 389/1024 [MB] (29 MBps) [2024-11-17T14:57:33.851Z] Copying: 410/1024 [MB] (20 MBps) [2024-11-17T14:57:34.794Z] Copying: 429/1024 [MB] (19 MBps) [2024-11-17T14:57:35.801Z] Copying: 443/1024 [MB] (14 MBps) [2024-11-17T14:57:36.745Z] Copying: 461/1024 [MB] (17 MBps) [2024-11-17T14:57:37.689Z] Copying: 473/1024 [MB] (12 MBps) [2024-11-17T14:57:39.075Z] Copying: 493/1024 [MB] (19 MBps) [2024-11-17T14:57:40.017Z] Copying: 513/1024 [MB] (20 MBps) [2024-11-17T14:57:40.958Z] Copying: 524/1024 [MB] (10 MBps) [2024-11-17T14:57:41.901Z] Copying: 539/1024 [MB] (15 MBps) [2024-11-17T14:57:42.845Z] Copying: 573/1024 [MB] (33 MBps) [2024-11-17T14:57:43.788Z] Copying: 613/1024 [MB] (40 MBps) [2024-11-17T14:57:44.729Z] Copying: 644/1024 [MB] (31 MBps) [2024-11-17T14:57:45.672Z] Copying: 675/1024 [MB] (31 MBps) [2024-11-17T14:57:47.058Z] Copying: 695/1024 [MB] (19 MBps) [2024-11-17T14:57:48.001Z] Copying: 740/1024 [MB] (44 MBps) [2024-11-17T14:57:48.943Z] Copying: 763/1024 [MB] (22 MBps) [2024-11-17T14:57:49.887Z] Copying: 785/1024 [MB] (22 MBps) [2024-11-17T14:57:50.831Z] Copying: 805/1024 [MB] (19 MBps) [2024-11-17T14:57:51.774Z] Copying: 827/1024 [MB] (21 MBps) [2024-11-17T14:57:52.719Z] Copying: 848/1024 [MB] (21 MBps) [2024-11-17T14:57:53.661Z] Copying: 872/1024 [MB] (23 MBps) 
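
The spdk_dd copy progress above (and the "average 24 MBps" figure reported when the run completes below) can be rechecked offline from the timestamps carried by these entries. A minimal sketch, assuming the console output was saved to a file named build.log (hypothetical name) and that progress entries keep the "Copying: N/1024 [MB] (X MBps)" form shown here; the regex and file name are illustrative, not part of the test suite:

# Sketch (Python): recompute overall copy throughput from the
# "[<ISO timestamp>Z] Copying: N/1024 [MB] (X MBps)" progress entries.
import re
from datetime import datetime

PROGRESS = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3})Z\]\s+"
    r"Copying:\s+(?P<mb>\d+)/\d+ \[MB\]"
)

samples = []
with open("build.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        # Several progress entries may be wrapped onto one physical line.
        for m in PROGRESS.finditer(line):
            ts = datetime.strptime(m.group("ts"), "%Y-%m-%dT%H:%M:%S.%f")
            samples.append((ts, int(m.group("mb"))))

if len(samples) >= 2:
    (t0, mb0), (t1, mb1) = samples[0], samples[-1]
    elapsed = (t1 - t0).total_seconds()
    # Rate between the first and last progress sample of the run.
    print(f"copied {mb1 - mb0} MB in {elapsed:.1f} s "
          f"(~{(mb1 - mb0) / elapsed:.1f} MBps)")
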
[2024-11-17T14:57:55.047Z] Copying: 901/1024 [MB] (29 MBps) [2024-11-17T14:57:55.992Z] Copying: 935/1024 [MB] (33 MBps) [2024-11-17T14:57:56.934Z] Copying: 950/1024 [MB] (15 MBps) [2024-11-17T14:57:57.875Z] Copying: 971/1024 [MB] (21 MBps) [2024-11-17T14:57:58.819Z] Copying: 1003/1024 [MB] (32 MBps) [2024-11-17T14:57:59.391Z] Copying: 1023/1024 [MB] (19 MBps) [2024-11-17T14:57:59.391Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-17 14:57:59.313197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.848 [2024-11-17 14:57:59.313275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:13.848 [2024-11-17 14:57:59.313294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:13.848 [2024-11-17 14:57:59.313312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.848 [2024-11-17 14:57:59.313737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:13.848 [2024-11-17 14:57:59.316778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.848 [2024-11-17 14:57:59.316828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:13.848 [2024-11-17 14:57:59.316840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms 00:21:13.848 [2024-11-17 14:57:59.316850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.848 [2024-11-17 14:57:59.329300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.848 [2024-11-17 14:57:59.329359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:13.848 [2024-11-17 14:57:59.329374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.911 ms 00:21:13.848 [2024-11-17 14:57:59.329383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.848 [2024-11-17 14:57:59.354867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.848 [2024-11-17 14:57:59.354932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:13.848 [2024-11-17 14:57:59.354944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.457 ms 00:21:13.848 [2024-11-17 14:57:59.354954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.848 [2024-11-17 14:57:59.361090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.848 [2024-11-17 14:57:59.361130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:13.848 [2024-11-17 14:57:59.361142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.098 ms 00:21:13.848 [2024-11-17 14:57:59.361150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.848 [2024-11-17 14:57:59.388125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.848 [2024-11-17 14:57:59.388172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:13.848 [2024-11-17 14:57:59.388185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.923 ms 00:21:13.848 [2024-11-17 14:57:59.388194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.108 [2024-11-17 14:57:59.403507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.108 [2024-11-17 14:57:59.403560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:14.108 [2024-11-17 14:57:59.403573] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.270 ms 00:21:14.108 [2024-11-17 14:57:59.403582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.369 [2024-11-17 14:57:59.704365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.369 [2024-11-17 14:57:59.704428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:14.369 [2024-11-17 14:57:59.704440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 300.722 ms 00:21:14.369 [2024-11-17 14:57:59.704447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.369 [2024-11-17 14:57:59.724937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.369 [2024-11-17 14:57:59.724973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:14.369 [2024-11-17 14:57:59.724983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.477 ms 00:21:14.369 [2024-11-17 14:57:59.724990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.369 [2024-11-17 14:57:59.744359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.369 [2024-11-17 14:57:59.744398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:14.369 [2024-11-17 14:57:59.744407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.335 ms 00:21:14.369 [2024-11-17 14:57:59.744413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.369 [2024-11-17 14:57:59.762876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.369 [2024-11-17 14:57:59.762906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:14.369 [2024-11-17 14:57:59.762915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.431 ms 00:21:14.369 [2024-11-17 14:57:59.762928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.369 [2024-11-17 14:57:59.780724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.369 [2024-11-17 14:57:59.780751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:14.369 [2024-11-17 14:57:59.780759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.750 ms 00:21:14.369 [2024-11-17 14:57:59.780765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.369 [2024-11-17 14:57:59.780792] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:14.369 [2024-11-17 14:57:59.780803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103168 / 261120 wr_cnt: 1 state: open 00:21:14.369 [2024-11-17 14:57:59.780811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 
state: free 00:21:14.369 [2024-11-17 14:57:59.780847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:14.369 [2024-11-17 14:57:59.780978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.780984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.780991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.780997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 
/ 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781291] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:14.370 [2024-11-17 14:57:59.781409] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:14.370 [2024-11-17 14:57:59.781415] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8ea8922-59e6-447b-9520-64154f0daa7c 00:21:14.370 [2024-11-17 14:57:59.781421] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103168 00:21:14.370 [2024-11-17 14:57:59.781427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104128 00:21:14.370 [2024-11-17 14:57:59.781432] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103168 00:21:14.370 [2024-11-17 14:57:59.781441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:21:14.370 [2024-11-17 14:57:59.781449] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:14.370 [2024-11-17 14:57:59.781462] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:14.370 [2024-11-17 14:57:59.781473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:14.370 [2024-11-17 14:57:59.781478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:14.370 [2024-11-17 14:57:59.781483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:14.370 [2024-11-17 14:57:59.781489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.370 [2024-11-17 14:57:59.781495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:14.370 [2024-11-17 14:57:59.781502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:21:14.370 [2024-11-17 14:57:59.781508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.370 [2024-11-17 14:57:59.790987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.370 [2024-11-17 14:57:59.791012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:14.370 [2024-11-17 14:57:59.791024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.455 ms 00:21:14.370 [2024-11-17 14:57:59.791033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.370 [2024-11-17 14:57:59.791304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.371 [2024-11-17 14:57:59.791313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:14.371 [2024-11-17 14:57:59.791320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:21:14.371 [2024-11-17 14:57:59.791325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.371 [2024-11-17 14:57:59.817455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.371 [2024-11-17 14:57:59.817481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:14.371 [2024-11-17 14:57:59.817491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.371 [2024-11-17 14:57:59.817497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.371 [2024-11-17 14:57:59.817541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.371 [2024-11-17 14:57:59.817548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:14.371 [2024-11-17 14:57:59.817554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.371 [2024-11-17 14:57:59.817560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.371 [2024-11-17 14:57:59.817598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.371 [2024-11-17 14:57:59.817605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:14.371 [2024-11-17 14:57:59.817612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.371 [2024-11-17 14:57:59.817620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.371 [2024-11-17 14:57:59.817630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.371 [2024-11-17 14:57:59.817637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:14.371 [2024-11-17 14:57:59.817644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.371 [2024-11-17 14:57:59.817651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:14.371 [2024-11-17 14:57:59.876365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.371 [2024-11-17 14:57:59.876397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:14.371 [2024-11-17 14:57:59.876409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.371 [2024-11-17 14:57:59.876415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-11-17 14:57:59.925122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-11-17 14:57:59.925153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:14.631 [2024-11-17 14:57:59.925161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-11-17 14:57:59.925167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-11-17 14:57:59.925204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-11-17 14:57:59.925211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:14.631 [2024-11-17 14:57:59.925218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-11-17 14:57:59.925223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-11-17 14:57:59.925267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-11-17 14:57:59.925274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:14.631 [2024-11-17 14:57:59.925280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-11-17 14:57:59.925287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-11-17 14:57:59.925445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-11-17 14:57:59.925455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:14.631 [2024-11-17 14:57:59.925461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-11-17 14:57:59.925466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-11-17 14:57:59.925493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-11-17 14:57:59.925501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:14.631 [2024-11-17 14:57:59.925507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-11-17 14:57:59.925513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-11-17 14:57:59.925539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-11-17 14:57:59.925545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:14.631 [2024-11-17 14:57:59.925551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-11-17 14:57:59.925557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-11-17 14:57:59.925589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-11-17 14:57:59.925597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:14.631 [2024-11-17 14:57:59.925604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-11-17 
14:57:59.925611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-11-17 14:57:59.925704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 613.844 ms, result 0 00:21:16.017 00:21:16.017 00:21:16.017 14:58:01 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:16.017 [2024-11-17 14:58:01.403515] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:21:16.017 [2024-11-17 14:58:01.403654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76415 ] 00:21:16.278 [2024-11-17 14:58:01.560111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.278 [2024-11-17 14:58:01.637259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.540 [2024-11-17 14:58:01.841735] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:16.540 [2024-11-17 14:58:01.841777] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:16.540 [2024-11-17 14:58:01.995691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:01.995724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:16.540 [2024-11-17 14:58:01.995738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:16.540 [2024-11-17 14:58:01.995745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:01.995780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:01.995788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:16.540 [2024-11-17 14:58:01.995796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:16.540 [2024-11-17 14:58:01.995802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:01.995815] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:16.540 [2024-11-17 14:58:01.996379] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:16.540 [2024-11-17 14:58:01.996396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:01.996403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:16.540 [2024-11-17 14:58:01.996409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:21:16.540 [2024-11-17 14:58:01.996415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:01.997339] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:16.540 [2024-11-17 14:58:02.007171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:02.007196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:16.540 [2024-11-17 14:58:02.007205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.833 
ms 00:21:16.540 [2024-11-17 14:58:02.007211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:02.007255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:02.007262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:16.540 [2024-11-17 14:58:02.007269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:16.540 [2024-11-17 14:58:02.007274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:02.011524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:02.011548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:16.540 [2024-11-17 14:58:02.011555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:21:16.540 [2024-11-17 14:58:02.011560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:02.011621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:02.011628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:16.540 [2024-11-17 14:58:02.011634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:16.540 [2024-11-17 14:58:02.011640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:02.011672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:02.011682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:16.540 [2024-11-17 14:58:02.011688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:16.540 [2024-11-17 14:58:02.011694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:02.011709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:16.540 [2024-11-17 14:58:02.014309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:02.014332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:16.540 [2024-11-17 14:58:02.014339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:21:16.540 [2024-11-17 14:58:02.014347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:02.014373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.540 [2024-11-17 14:58:02.014379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:16.540 [2024-11-17 14:58:02.014386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:16.540 [2024-11-17 14:58:02.014392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.540 [2024-11-17 14:58:02.014405] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:16.540 [2024-11-17 14:58:02.014419] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:16.540 [2024-11-17 14:58:02.014447] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:16.540 [2024-11-17 14:58:02.014460] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 
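
The superblock regions appear twice in this log in different units: ftl_superblock_v5_md_layout_dump prints blk_offs/blk_sz as hex counts of FTL blocks, while the dump_region entries list the same regions as MiB offsets and sizes. A minimal sketch of that conversion, assuming a 4 KiB FTL block size, which the figures in this log are consistent with (for example blk_sz:0x5000 for the l2p region corresponds to the 80.00 MiB shown by dump_region); the helper name is illustrative:

# Sketch (Python): convert "Region type:... blk_offs:... blk_sz:..." hex
# block counts into the MiB values printed by dump_region.
# Assumes a 4 KiB FTL block, consistent with the numbers in this log.
FTL_BLOCK_SIZE = 4096
MIB = 1024 * 1024

def region_in_mib(blk_offs, blk_sz):
    """Return (offset_mib, size_mib) for one metadata region."""
    return (blk_offs * FTL_BLOCK_SIZE / MIB, blk_sz * FTL_BLOCK_SIZE / MIB)

# l2p region from the layout dump: blk_offs:0x20 blk_sz:0x5000
offset_mib, size_mib = region_in_mib(0x20, 0x5000)
print(f"offset: {offset_mib:.2f} MiB   blocks: {size_mib:.2f} MiB")
# prints "offset: 0.12 MiB   blocks: 80.00 MiB", matching the
# "Region l2p" entry in the dump_region output.
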
00:21:16.540 [2024-11-17 14:58:02.014539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:16.540 [2024-11-17 14:58:02.014549] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:16.540 [2024-11-17 14:58:02.014557] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:16.540 [2024-11-17 14:58:02.014564] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:16.540 [2024-11-17 14:58:02.014572] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:16.540 [2024-11-17 14:58:02.014578] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:16.541 [2024-11-17 14:58:02.014583] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:16.541 [2024-11-17 14:58:02.014588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:16.541 [2024-11-17 14:58:02.014594] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:16.541 [2024-11-17 14:58:02.014602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.541 [2024-11-17 14:58:02.014608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:16.541 [2024-11-17 14:58:02.014615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:21:16.541 [2024-11-17 14:58:02.014620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.541 [2024-11-17 14:58:02.014683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.541 [2024-11-17 14:58:02.014690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:16.541 [2024-11-17 14:58:02.014696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:16.541 [2024-11-17 14:58:02.014702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.541 [2024-11-17 14:58:02.014776] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:16.541 [2024-11-17 14:58:02.014788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:16.541 [2024-11-17 14:58:02.014795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:16.541 [2024-11-17 14:58:02.014802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:16.541 [2024-11-17 14:58:02.014814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:16.541 [2024-11-17 14:58:02.014826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:16.541 [2024-11-17 14:58:02.014831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:16.541 [2024-11-17 14:58:02.014842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:16.541 [2024-11-17 14:58:02.014847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:16.541 [2024-11-17 14:58:02.014852] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:21:16.541 [2024-11-17 14:58:02.014857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:16.541 [2024-11-17 14:58:02.014863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:16.541 [2024-11-17 14:58:02.014872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:16.541 [2024-11-17 14:58:02.014882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:16.541 [2024-11-17 14:58:02.014886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:16.541 [2024-11-17 14:58:02.014897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:16.541 [2024-11-17 14:58:02.014906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:16.541 [2024-11-17 14:58:02.014911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:16.541 [2024-11-17 14:58:02.014933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:16.541 [2024-11-17 14:58:02.014938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:16.541 [2024-11-17 14:58:02.014948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:16.541 [2024-11-17 14:58:02.014954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:16.541 [2024-11-17 14:58:02.014964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:16.541 [2024-11-17 14:58:02.014970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:16.541 [2024-11-17 14:58:02.014975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:16.541 [2024-11-17 14:58:02.014980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:16.541 [2024-11-17 14:58:02.014985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:16.541 [2024-11-17 14:58:02.014991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:16.541 [2024-11-17 14:58:02.014996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:16.541 [2024-11-17 14:58:02.015001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:16.541 [2024-11-17 14:58:02.015006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.541 [2024-11-17 14:58:02.015011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:16.541 [2024-11-17 14:58:02.015016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:16.541 [2024-11-17 14:58:02.015022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.541 [2024-11-17 14:58:02.015028] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:16.541 [2024-11-17 
14:58:02.015033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:16.541 [2024-11-17 14:58:02.015039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:16.541 [2024-11-17 14:58:02.015044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.541 [2024-11-17 14:58:02.015051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:16.541 [2024-11-17 14:58:02.015056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:16.541 [2024-11-17 14:58:02.015061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:16.541 [2024-11-17 14:58:02.015066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:16.541 [2024-11-17 14:58:02.015071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:16.541 [2024-11-17 14:58:02.015076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:16.541 [2024-11-17 14:58:02.015082] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:16.541 [2024-11-17 14:58:02.015088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:16.541 [2024-11-17 14:58:02.015094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:16.541 [2024-11-17 14:58:02.015100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:16.541 [2024-11-17 14:58:02.015106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:16.541 [2024-11-17 14:58:02.015114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:16.541 [2024-11-17 14:58:02.015119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:16.541 [2024-11-17 14:58:02.015124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:16.541 [2024-11-17 14:58:02.015129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:16.541 [2024-11-17 14:58:02.015134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:16.541 [2024-11-17 14:58:02.015139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:16.541 [2024-11-17 14:58:02.015145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:16.541 [2024-11-17 14:58:02.015150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:16.541 [2024-11-17 14:58:02.015155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:16.541 [2024-11-17 14:58:02.015161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:21:16.541 [2024-11-17 14:58:02.015166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:16.541 [2024-11-17 14:58:02.015171] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:16.541 [2024-11-17 14:58:02.015178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:16.541 [2024-11-17 14:58:02.015184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:16.541 [2024-11-17 14:58:02.015189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:16.541 [2024-11-17 14:58:02.015195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:16.541 [2024-11-17 14:58:02.015201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:16.541 [2024-11-17 14:58:02.015207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.541 [2024-11-17 14:58:02.015213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:16.541 [2024-11-17 14:58:02.015219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:21:16.541 [2024-11-17 14:58:02.015224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.541 [2024-11-17 14:58:02.035812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.541 [2024-11-17 14:58:02.035838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:16.541 [2024-11-17 14:58:02.035846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.557 ms 00:21:16.541 [2024-11-17 14:58:02.035853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.541 [2024-11-17 14:58:02.035916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.541 [2024-11-17 14:58:02.035932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:16.541 [2024-11-17 14:58:02.035938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:16.541 [2024-11-17 14:58:02.035944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.541 [2024-11-17 14:58:02.080134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.541 [2024-11-17 14:58:02.080164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:16.542 [2024-11-17 14:58:02.080173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.150 ms 00:21:16.542 [2024-11-17 14:58:02.080179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.542 [2024-11-17 14:58:02.080209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.542 [2024-11-17 14:58:02.080216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:16.542 [2024-11-17 14:58:02.080223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:16.542 [2024-11-17 14:58:02.080231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.080530] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.080544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:16.803 [2024-11-17 14:58:02.080552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:21:16.803 [2024-11-17 14:58:02.080558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.080653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.080672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:16.803 [2024-11-17 14:58:02.080679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:16.803 [2024-11-17 14:58:02.080685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.091050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.091074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:16.803 [2024-11-17 14:58:02.091082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.346 ms 00:21:16.803 [2024-11-17 14:58:02.091090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.101055] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:16.803 [2024-11-17 14:58:02.101080] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:16.803 [2024-11-17 14:58:02.101089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.101096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:16.803 [2024-11-17 14:58:02.101103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.921 ms 00:21:16.803 [2024-11-17 14:58:02.101108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.119653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.119682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:16.803 [2024-11-17 14:58:02.119690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.515 ms 00:21:16.803 [2024-11-17 14:58:02.119696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.128814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.128843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:16.803 [2024-11-17 14:58:02.128850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.091 ms 00:21:16.803 [2024-11-17 14:58:02.128856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.137687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.137710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:16.803 [2024-11-17 14:58:02.137717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.806 ms 00:21:16.803 [2024-11-17 14:58:02.137722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.138182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 
14:58:02.138198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:16.803 [2024-11-17 14:58:02.138205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:21:16.803 [2024-11-17 14:58:02.138214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.182025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.182061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:16.803 [2024-11-17 14:58:02.182074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.797 ms 00:21:16.803 [2024-11-17 14:58:02.182081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.189753] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:16.803 [2024-11-17 14:58:02.191336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.191359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:16.803 [2024-11-17 14:58:02.191367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.223 ms 00:21:16.803 [2024-11-17 14:58:02.191373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.803 [2024-11-17 14:58:02.191422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.803 [2024-11-17 14:58:02.191431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:16.803 [2024-11-17 14:58:02.191438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:16.803 [2024-11-17 14:58:02.191445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.804 [2024-11-17 14:58:02.192476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.804 [2024-11-17 14:58:02.192501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:16.804 [2024-11-17 14:58:02.192509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:21:16.804 [2024-11-17 14:58:02.192515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.804 [2024-11-17 14:58:02.192532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.804 [2024-11-17 14:58:02.192539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:16.804 [2024-11-17 14:58:02.192545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:16.804 [2024-11-17 14:58:02.192551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.804 [2024-11-17 14:58:02.192588] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:16.804 [2024-11-17 14:58:02.192598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.804 [2024-11-17 14:58:02.192604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:16.804 [2024-11-17 14:58:02.192610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:16.804 [2024-11-17 14:58:02.192616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.804 [2024-11-17 14:58:02.210795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.804 [2024-11-17 14:58:02.210820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:16.804 [2024-11-17 
14:58:02.210829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.166 ms 00:21:16.804 [2024-11-17 14:58:02.210838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.804 [2024-11-17 14:58:02.210891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.804 [2024-11-17 14:58:02.210898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:16.804 [2024-11-17 14:58:02.210905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:16.804 [2024-11-17 14:58:02.210911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.804 [2024-11-17 14:58:02.211662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 215.584 ms, result 0 00:21:18.192  [2024-11-17T14:58:04.371Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-17T14:58:05.756Z] Copying: 23/1024 [MB] (10 MBps) [2024-11-17T14:58:06.698Z] Copying: 43/1024 [MB] (19 MBps) [2024-11-17T14:58:07.641Z] Copying: 62/1024 [MB] (19 MBps) [2024-11-17T14:58:08.584Z] Copying: 76/1024 [MB] (13 MBps) [2024-11-17T14:58:09.525Z] Copying: 93/1024 [MB] (17 MBps) [2024-11-17T14:58:10.468Z] Copying: 109/1024 [MB] (15 MBps) [2024-11-17T14:58:11.410Z] Copying: 128/1024 [MB] (18 MBps) [2024-11-17T14:58:12.354Z] Copying: 143/1024 [MB] (15 MBps) [2024-11-17T14:58:13.740Z] Copying: 162/1024 [MB] (19 MBps) [2024-11-17T14:58:14.685Z] Copying: 182/1024 [MB] (20 MBps) [2024-11-17T14:58:15.627Z] Copying: 195/1024 [MB] (12 MBps) [2024-11-17T14:58:16.569Z] Copying: 206/1024 [MB] (10 MBps) [2024-11-17T14:58:17.512Z] Copying: 216/1024 [MB] (10 MBps) [2024-11-17T14:58:18.455Z] Copying: 226/1024 [MB] (10 MBps) [2024-11-17T14:58:19.397Z] Copying: 236/1024 [MB] (10 MBps) [2024-11-17T14:58:20.783Z] Copying: 247/1024 [MB] (10 MBps) [2024-11-17T14:58:21.357Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-17T14:58:22.748Z] Copying: 268/1024 [MB] (10 MBps) [2024-11-17T14:58:23.693Z] Copying: 278/1024 [MB] (10 MBps) [2024-11-17T14:58:24.637Z] Copying: 289/1024 [MB] (10 MBps) [2024-11-17T14:58:25.583Z] Copying: 300/1024 [MB] (10 MBps) [2024-11-17T14:58:26.529Z] Copying: 310/1024 [MB] (10 MBps) [2024-11-17T14:58:27.476Z] Copying: 321/1024 [MB] (10 MBps) [2024-11-17T14:58:28.421Z] Copying: 338/1024 [MB] (16 MBps) [2024-11-17T14:58:29.372Z] Copying: 352/1024 [MB] (13 MBps) [2024-11-17T14:58:30.760Z] Copying: 363/1024 [MB] (10 MBps) [2024-11-17T14:58:31.705Z] Copying: 380/1024 [MB] (17 MBps) [2024-11-17T14:58:32.649Z] Copying: 396/1024 [MB] (16 MBps) [2024-11-17T14:58:33.656Z] Copying: 409/1024 [MB] (12 MBps) [2024-11-17T14:58:34.610Z] Copying: 422/1024 [MB] (12 MBps) [2024-11-17T14:58:35.552Z] Copying: 433/1024 [MB] (11 MBps) [2024-11-17T14:58:36.494Z] Copying: 446/1024 [MB] (13 MBps) [2024-11-17T14:58:37.439Z] Copying: 457/1024 [MB] (10 MBps) [2024-11-17T14:58:38.380Z] Copying: 472/1024 [MB] (15 MBps) [2024-11-17T14:58:39.767Z] Copying: 487/1024 [MB] (14 MBps) [2024-11-17T14:58:40.713Z] Copying: 504/1024 [MB] (17 MBps) [2024-11-17T14:58:41.657Z] Copying: 520/1024 [MB] (15 MBps) [2024-11-17T14:58:42.603Z] Copying: 536/1024 [MB] (16 MBps) [2024-11-17T14:58:43.547Z] Copying: 553/1024 [MB] (16 MBps) [2024-11-17T14:58:44.490Z] Copying: 565/1024 [MB] (12 MBps) [2024-11-17T14:58:45.434Z] Copying: 583/1024 [MB] (17 MBps) [2024-11-17T14:58:46.378Z] Copying: 598/1024 [MB] (14 MBps) [2024-11-17T14:58:47.767Z] Copying: 612/1024 [MB] (14 MBps) [2024-11-17T14:58:48.712Z] Copying: 627/1024 [MB] (15 MBps) 
[2024-11-17T14:58:49.665Z] Copying: 648/1024 [MB] (20 MBps) [2024-11-17T14:58:50.608Z] Copying: 659/1024 [MB] (11 MBps) [2024-11-17T14:58:51.552Z] Copying: 676/1024 [MB] (16 MBps) [2024-11-17T14:58:52.495Z] Copying: 686/1024 [MB] (10 MBps) [2024-11-17T14:58:53.437Z] Copying: 701/1024 [MB] (14 MBps) [2024-11-17T14:58:54.381Z] Copying: 718/1024 [MB] (17 MBps) [2024-11-17T14:58:55.765Z] Copying: 733/1024 [MB] (15 MBps) [2024-11-17T14:58:56.709Z] Copying: 747/1024 [MB] (13 MBps) [2024-11-17T14:58:57.651Z] Copying: 764/1024 [MB] (17 MBps) [2024-11-17T14:58:58.594Z] Copying: 782/1024 [MB] (18 MBps) [2024-11-17T14:58:59.541Z] Copying: 796/1024 [MB] (14 MBps) [2024-11-17T14:59:00.485Z] Copying: 811/1024 [MB] (14 MBps) [2024-11-17T14:59:01.428Z] Copying: 825/1024 [MB] (14 MBps) [2024-11-17T14:59:02.461Z] Copying: 841/1024 [MB] (15 MBps) [2024-11-17T14:59:03.411Z] Copying: 852/1024 [MB] (10 MBps) [2024-11-17T14:59:04.355Z] Copying: 863/1024 [MB] (11 MBps) [2024-11-17T14:59:05.741Z] Copying: 888/1024 [MB] (24 MBps) [2024-11-17T14:59:06.684Z] Copying: 899/1024 [MB] (10 MBps) [2024-11-17T14:59:07.628Z] Copying: 910/1024 [MB] (10 MBps) [2024-11-17T14:59:08.573Z] Copying: 920/1024 [MB] (10 MBps) [2024-11-17T14:59:09.516Z] Copying: 931/1024 [MB] (10 MBps) [2024-11-17T14:59:10.459Z] Copying: 947/1024 [MB] (15 MBps) [2024-11-17T14:59:11.402Z] Copying: 962/1024 [MB] (15 MBps) [2024-11-17T14:59:12.789Z] Copying: 980/1024 [MB] (18 MBps) [2024-11-17T14:59:13.362Z] Copying: 991/1024 [MB] (10 MBps) [2024-11-17T14:59:14.747Z] Copying: 1002/1024 [MB] (10 MBps) [2024-11-17T14:59:15.692Z] Copying: 1013/1024 [MB] (10 MBps) [2024-11-17T14:59:15.692Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-17 14:59:15.453379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.149 [2024-11-17 14:59:15.453812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:30.149 [2024-11-17 14:59:15.453959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:30.149 [2024-11-17 14:59:15.454012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.149 [2024-11-17 14:59:15.454112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:30.149 [2024-11-17 14:59:15.459914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.149 [2024-11-17 14:59:15.460115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:30.149 [2024-11-17 14:59:15.460195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.722 ms 00:22:30.149 [2024-11-17 14:59:15.460225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.149 [2024-11-17 14:59:15.460552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.149 [2024-11-17 14:59:15.460588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:30.149 [2024-11-17 14:59:15.460614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:22:30.149 [2024-11-17 14:59:15.460689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.149 [2024-11-17 14:59:15.468164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.149 [2024-11-17 14:59:15.468333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:30.149 [2024-11-17 14:59:15.468354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.427 ms 00:22:30.149 [2024-11-17 14:59:15.468365] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.149 [2024-11-17 14:59:15.474757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.149 [2024-11-17 14:59:15.474892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:30.149 [2024-11-17 14:59:15.474982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.351 ms 00:22:30.149 [2024-11-17 14:59:15.475009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.149 [2024-11-17 14:59:15.501575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.149 [2024-11-17 14:59:15.501745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:30.149 [2024-11-17 14:59:15.501818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.498 ms 00:22:30.149 [2024-11-17 14:59:15.501842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.149 [2024-11-17 14:59:15.518387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.149 [2024-11-17 14:59:15.518546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:30.149 [2024-11-17 14:59:15.518607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.495 ms 00:22:30.149 [2024-11-17 14:59:15.518631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.410 [2024-11-17 14:59:15.885508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.410 [2024-11-17 14:59:15.885731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:30.410 [2024-11-17 14:59:15.885834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 366.533 ms 00:22:30.410 [2024-11-17 14:59:15.885863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.410 [2024-11-17 14:59:15.912701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.410 [2024-11-17 14:59:15.912901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:30.410 [2024-11-17 14:59:15.913013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.801 ms 00:22:30.410 [2024-11-17 14:59:15.913039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.410 [2024-11-17 14:59:15.938560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.410 [2024-11-17 14:59:15.938737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:30.410 [2024-11-17 14:59:15.938956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.470 ms 00:22:30.410 [2024-11-17 14:59:15.938998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.672 [2024-11-17 14:59:15.963399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.672 [2024-11-17 14:59:15.963573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:30.672 [2024-11-17 14:59:15.964026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.343 ms 00:22:30.672 [2024-11-17 14:59:15.964053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.672 [2024-11-17 14:59:15.988981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.672 [2024-11-17 14:59:15.989148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:30.672 [2024-11-17 14:59:15.989225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 24.813 ms 00:22:30.672 [2024-11-17 14:59:15.989249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.672 [2024-11-17 14:59:15.989381] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:30.673 [2024-11-17 14:59:15.989452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:22:30.673 [2024-11-17 14:59:15.989607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 
[2024-11-17 14:59:15.990213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:22:30.673 [2024-11-17 14:59:15.990404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:30.673 [2024-11-17 14:59:15.990720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:30.674 [2024-11-17 14:59:15.990837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:30.674 [2024-11-17 14:59:15.990846] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8ea8922-59e6-447b-9520-64154f0daa7c 00:22:30.674 [2024-11-17 14:59:15.990855] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:22:30.674 [2024-11-17 14:59:15.990863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 29632 00:22:30.674 [2024-11-17 14:59:15.990872] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 28672 00:22:30.674 [2024-11-17 14:59:15.990882] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0335 00:22:30.674 [2024-11-17 14:59:15.990890] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:30.674 [2024-11-17 14:59:15.990902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:30.674 [2024-11-17 14:59:15.990910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:30.674 [2024-11-17 14:59:15.990935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:30.674 [2024-11-17 14:59:15.990942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:30.674 [2024-11-17 14:59:15.990952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.674 [2024-11-17 14:59:15.990962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:30.674 [2024-11-17 14:59:15.990971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:22:30.674 [2024-11-17 14:59:15.990979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.004619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.674 [2024-11-17 14:59:16.004665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:30.674 [2024-11-17 14:59:16.004677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.584 ms 00:22:30.674 [2024-11-17 14:59:16.004693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.005099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.674 [2024-11-17 14:59:16.005117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:30.674 [2024-11-17 14:59:16.005127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:22:30.674 [2024-11-17 14:59:16.005136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.041536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.041587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:30.674 [2024-11-17 14:59:16.041605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.041614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.041688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 
14:59:16.041699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:30.674 [2024-11-17 14:59:16.041709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.041718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.041786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.041798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:30.674 [2024-11-17 14:59:16.041808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.041821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.041838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.041847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:30.674 [2024-11-17 14:59:16.041856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.041864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.126670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.126727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:30.674 [2024-11-17 14:59:16.126747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.126756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.196747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.196804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:30.674 [2024-11-17 14:59:16.196815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.196824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.196907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.196932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:30.674 [2024-11-17 14:59:16.196943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.196952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.196995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.197005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:30.674 [2024-11-17 14:59:16.197014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.197022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.197125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.197136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:30.674 [2024-11-17 14:59:16.197146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.197153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.197187] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.197197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:30.674 [2024-11-17 14:59:16.197205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.197213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.197255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.197265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:30.674 [2024-11-17 14:59:16.197274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.197282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.197334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:30.674 [2024-11-17 14:59:16.197345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:30.674 [2024-11-17 14:59:16.197354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:30.674 [2024-11-17 14:59:16.197362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.674 [2024-11-17 14:59:16.197496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 744.096 ms, result 0 00:22:31.613 00:22:31.613 00:22:31.613 14:59:16 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:34.161 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:34.161 Process with pid 74365 is not found 00:22:34.161 Remove shared memory files 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74365 00:22:34.161 14:59:19 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74365 ']' 00:22:34.161 14:59:19 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74365 00:22:34.161 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74365) - No such process 00:22:34.161 14:59:19 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 74365 is not found' 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:34.161 14:59:19 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:34.161 ************************************ 00:22:34.161 END TEST ftl_restore 00:22:34.161 ************************************ 00:22:34.161 00:22:34.161 real 
4m36.026s 00:22:34.161 user 4m22.569s 00:22:34.161 sys 0m13.080s 00:22:34.161 14:59:19 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.161 14:59:19 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:34.161 14:59:19 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:34.161 14:59:19 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:34.161 14:59:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.161 14:59:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:34.161 ************************************ 00:22:34.161 START TEST ftl_dirty_shutdown 00:22:34.161 ************************************ 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:34.161 * Looking for test storage... 00:22:34.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:34.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.161 --rc genhtml_branch_coverage=1 00:22:34.161 --rc genhtml_function_coverage=1 00:22:34.161 --rc genhtml_legend=1 00:22:34.161 --rc geninfo_all_blocks=1 00:22:34.161 --rc geninfo_unexecuted_blocks=1 00:22:34.161 00:22:34.161 ' 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:34.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.161 --rc genhtml_branch_coverage=1 00:22:34.161 --rc genhtml_function_coverage=1 00:22:34.161 --rc genhtml_legend=1 00:22:34.161 --rc geninfo_all_blocks=1 00:22:34.161 --rc geninfo_unexecuted_blocks=1 00:22:34.161 00:22:34.161 ' 00:22:34.161 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:34.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.161 --rc genhtml_branch_coverage=1 00:22:34.161 --rc genhtml_function_coverage=1 00:22:34.161 --rc genhtml_legend=1 00:22:34.161 --rc geninfo_all_blocks=1 00:22:34.161 --rc geninfo_unexecuted_blocks=1 00:22:34.161 00:22:34.161 ' 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:34.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.162 --rc genhtml_branch_coverage=1 00:22:34.162 --rc genhtml_function_coverage=1 00:22:34.162 --rc genhtml_legend=1 00:22:34.162 --rc geninfo_all_blocks=1 00:22:34.162 --rc geninfo_unexecuted_blocks=1 00:22:34.162 00:22:34.162 ' 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:34.162 14:59:19 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77283 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77283 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77283 ']' 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.162 14:59:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.162 [2024-11-17 14:59:19.563809] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:34.162 [2024-11-17 14:59:19.564071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77283 ] 00:22:34.423 [2024-11-17 14:59:19.723249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.423 [2024-11-17 14:59:19.825765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.995 14:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.995 14:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:34.995 14:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:34.995 14:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:34.995 14:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:34.995 14:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:34.995 14:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:34.995 14:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:35.256 14:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:35.256 14:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:35.256 14:59:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:35.257 14:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:35.257 14:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:35.257 14:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:35.257 14:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:35.257 14:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:35.518 14:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:35.518 { 00:22:35.518 "name": "nvme0n1", 00:22:35.518 "aliases": [ 00:22:35.518 "d03c2d9e-5e14-40ef-9d50-579a40321dc7" 00:22:35.518 ], 00:22:35.518 "product_name": "NVMe disk", 00:22:35.518 "block_size": 4096, 00:22:35.518 "num_blocks": 1310720, 00:22:35.518 "uuid": "d03c2d9e-5e14-40ef-9d50-579a40321dc7", 00:22:35.518 "numa_id": -1, 00:22:35.518 "assigned_rate_limits": { 00:22:35.518 "rw_ios_per_sec": 0, 00:22:35.518 "rw_mbytes_per_sec": 0, 00:22:35.518 "r_mbytes_per_sec": 0, 00:22:35.518 "w_mbytes_per_sec": 0 00:22:35.518 }, 00:22:35.518 "claimed": true, 00:22:35.518 "claim_type": "read_many_write_one", 00:22:35.518 "zoned": false, 00:22:35.518 "supported_io_types": { 00:22:35.518 "read": true, 00:22:35.518 "write": true, 00:22:35.518 "unmap": true, 00:22:35.518 "flush": true, 00:22:35.518 "reset": true, 00:22:35.518 "nvme_admin": true, 00:22:35.518 "nvme_io": true, 00:22:35.518 "nvme_io_md": false, 00:22:35.518 "write_zeroes": true, 00:22:35.518 "zcopy": false, 00:22:35.518 "get_zone_info": false, 00:22:35.518 "zone_management": false, 00:22:35.518 "zone_append": false, 00:22:35.518 "compare": true, 00:22:35.518 "compare_and_write": false, 00:22:35.518 "abort": true, 00:22:35.518 "seek_hole": false, 00:22:35.518 "seek_data": false, 00:22:35.518 
"copy": true, 00:22:35.518 "nvme_iov_md": false 00:22:35.518 }, 00:22:35.518 "driver_specific": { 00:22:35.518 "nvme": [ 00:22:35.518 { 00:22:35.518 "pci_address": "0000:00:11.0", 00:22:35.518 "trid": { 00:22:35.518 "trtype": "PCIe", 00:22:35.518 "traddr": "0000:00:11.0" 00:22:35.518 }, 00:22:35.518 "ctrlr_data": { 00:22:35.519 "cntlid": 0, 00:22:35.519 "vendor_id": "0x1b36", 00:22:35.519 "model_number": "QEMU NVMe Ctrl", 00:22:35.519 "serial_number": "12341", 00:22:35.519 "firmware_revision": "8.0.0", 00:22:35.519 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:35.519 "oacs": { 00:22:35.519 "security": 0, 00:22:35.519 "format": 1, 00:22:35.519 "firmware": 0, 00:22:35.519 "ns_manage": 1 00:22:35.519 }, 00:22:35.519 "multi_ctrlr": false, 00:22:35.519 "ana_reporting": false 00:22:35.519 }, 00:22:35.519 "vs": { 00:22:35.519 "nvme_version": "1.4" 00:22:35.519 }, 00:22:35.519 "ns_data": { 00:22:35.519 "id": 1, 00:22:35.519 "can_share": false 00:22:35.519 } 00:22:35.519 } 00:22:35.519 ], 00:22:35.519 "mp_policy": "active_passive" 00:22:35.519 } 00:22:35.519 } 00:22:35.519 ]' 00:22:35.519 14:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:35.519 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:35.779 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=efd7472e-a6da-49e6-aeae-5788363a0b0d 00:22:35.779 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:35.779 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u efd7472e-a6da-49e6-aeae-5788363a0b0d 00:22:36.040 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:36.301 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f3226a3c-4699-4e59-a34c-5850eeddb357 00:22:36.301 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f3226a3c-4699-4e59-a34c-5850eeddb357 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:36.562 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:36.824 { 00:22:36.824 "name": "84771480-b2ae-4efa-8445-ba7a61c9a796", 00:22:36.824 "aliases": [ 00:22:36.824 "lvs/nvme0n1p0" 00:22:36.824 ], 00:22:36.824 "product_name": "Logical Volume", 00:22:36.824 "block_size": 4096, 00:22:36.824 "num_blocks": 26476544, 00:22:36.824 "uuid": "84771480-b2ae-4efa-8445-ba7a61c9a796", 00:22:36.824 "assigned_rate_limits": { 00:22:36.824 "rw_ios_per_sec": 0, 00:22:36.824 "rw_mbytes_per_sec": 0, 00:22:36.824 "r_mbytes_per_sec": 0, 00:22:36.824 "w_mbytes_per_sec": 0 00:22:36.824 }, 00:22:36.824 "claimed": false, 00:22:36.824 "zoned": false, 00:22:36.824 "supported_io_types": { 00:22:36.824 "read": true, 00:22:36.824 "write": true, 00:22:36.824 "unmap": true, 00:22:36.824 "flush": false, 00:22:36.824 "reset": true, 00:22:36.824 "nvme_admin": false, 00:22:36.824 "nvme_io": false, 00:22:36.824 "nvme_io_md": false, 00:22:36.824 "write_zeroes": true, 00:22:36.824 "zcopy": false, 00:22:36.824 "get_zone_info": false, 00:22:36.824 "zone_management": false, 00:22:36.824 "zone_append": false, 00:22:36.824 "compare": false, 00:22:36.824 "compare_and_write": false, 00:22:36.824 "abort": false, 00:22:36.824 "seek_hole": true, 00:22:36.824 "seek_data": true, 00:22:36.824 "copy": false, 00:22:36.824 "nvme_iov_md": false 00:22:36.824 }, 00:22:36.824 "driver_specific": { 00:22:36.824 "lvol": { 00:22:36.824 "lvol_store_uuid": "f3226a3c-4699-4e59-a34c-5850eeddb357", 00:22:36.824 "base_bdev": "nvme0n1", 00:22:36.824 "thin_provision": true, 00:22:36.824 "num_allocated_clusters": 0, 00:22:36.824 "snapshot": false, 00:22:36.824 "clone": false, 00:22:36.824 "esnap_clone": false 00:22:36.824 } 00:22:36.824 } 00:22:36.824 } 00:22:36.824 ]' 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:36.824 14:59:22 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:37.085 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:37.085 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:37.085 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:37.085 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:37.085 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:37.085 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:37.085 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:37.085 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:37.346 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:37.346 { 00:22:37.346 "name": "84771480-b2ae-4efa-8445-ba7a61c9a796", 00:22:37.346 "aliases": [ 00:22:37.346 "lvs/nvme0n1p0" 00:22:37.346 ], 00:22:37.346 "product_name": "Logical Volume", 00:22:37.346 "block_size": 4096, 00:22:37.346 "num_blocks": 26476544, 00:22:37.346 "uuid": "84771480-b2ae-4efa-8445-ba7a61c9a796", 00:22:37.346 "assigned_rate_limits": { 00:22:37.346 "rw_ios_per_sec": 0, 00:22:37.346 "rw_mbytes_per_sec": 0, 00:22:37.346 "r_mbytes_per_sec": 0, 00:22:37.346 "w_mbytes_per_sec": 0 00:22:37.346 }, 00:22:37.346 "claimed": false, 00:22:37.346 "zoned": false, 00:22:37.346 "supported_io_types": { 00:22:37.346 "read": true, 00:22:37.346 "write": true, 00:22:37.346 "unmap": true, 00:22:37.346 "flush": false, 00:22:37.346 "reset": true, 00:22:37.346 "nvme_admin": false, 00:22:37.346 "nvme_io": false, 00:22:37.346 "nvme_io_md": false, 00:22:37.346 "write_zeroes": true, 00:22:37.346 "zcopy": false, 00:22:37.346 "get_zone_info": false, 00:22:37.346 "zone_management": false, 00:22:37.346 "zone_append": false, 00:22:37.346 "compare": false, 00:22:37.346 "compare_and_write": false, 00:22:37.346 "abort": false, 00:22:37.346 "seek_hole": true, 00:22:37.346 "seek_data": true, 00:22:37.346 "copy": false, 00:22:37.346 "nvme_iov_md": false 00:22:37.346 }, 00:22:37.346 "driver_specific": { 00:22:37.346 "lvol": { 00:22:37.346 "lvol_store_uuid": "f3226a3c-4699-4e59-a34c-5850eeddb357", 00:22:37.346 "base_bdev": "nvme0n1", 00:22:37.346 "thin_provision": true, 00:22:37.346 "num_allocated_clusters": 0, 00:22:37.346 "snapshot": false, 00:22:37.346 "clone": false, 00:22:37.346 "esnap_clone": false 00:22:37.346 } 00:22:37.346 } 00:22:37.346 } 00:22:37.346 ]' 00:22:37.346 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:37.346 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:37.346 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:37.346 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:37.346 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:37.346 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:37.346 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:37.346 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:37.608 14:59:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:37.608 14:59:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:37.608 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:37.608 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:37.608 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:37.608 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:37.608 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84771480-b2ae-4efa-8445-ba7a61c9a796 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:37.870 { 00:22:37.870 "name": "84771480-b2ae-4efa-8445-ba7a61c9a796", 00:22:37.870 "aliases": [ 00:22:37.870 "lvs/nvme0n1p0" 00:22:37.870 ], 00:22:37.870 "product_name": "Logical Volume", 00:22:37.870 "block_size": 4096, 00:22:37.870 "num_blocks": 26476544, 00:22:37.870 "uuid": "84771480-b2ae-4efa-8445-ba7a61c9a796", 00:22:37.870 "assigned_rate_limits": { 00:22:37.870 "rw_ios_per_sec": 0, 00:22:37.870 "rw_mbytes_per_sec": 0, 00:22:37.870 "r_mbytes_per_sec": 0, 00:22:37.870 "w_mbytes_per_sec": 0 00:22:37.870 }, 00:22:37.870 "claimed": false, 00:22:37.870 "zoned": false, 00:22:37.870 "supported_io_types": { 00:22:37.870 "read": true, 00:22:37.870 "write": true, 00:22:37.870 "unmap": true, 00:22:37.870 "flush": false, 00:22:37.870 "reset": true, 00:22:37.870 "nvme_admin": false, 00:22:37.870 "nvme_io": false, 00:22:37.870 "nvme_io_md": false, 00:22:37.870 "write_zeroes": true, 00:22:37.870 "zcopy": false, 00:22:37.870 "get_zone_info": false, 00:22:37.870 "zone_management": false, 00:22:37.870 "zone_append": false, 00:22:37.870 "compare": false, 00:22:37.870 "compare_and_write": false, 00:22:37.870 "abort": false, 00:22:37.870 "seek_hole": true, 00:22:37.870 "seek_data": true, 00:22:37.870 "copy": false, 00:22:37.870 "nvme_iov_md": false 00:22:37.870 }, 00:22:37.870 "driver_specific": { 00:22:37.870 "lvol": { 00:22:37.870 "lvol_store_uuid": "f3226a3c-4699-4e59-a34c-5850eeddb357", 00:22:37.870 "base_bdev": "nvme0n1", 00:22:37.870 "thin_provision": true, 00:22:37.870 "num_allocated_clusters": 0, 00:22:37.870 "snapshot": false, 00:22:37.870 "clone": false, 00:22:37.870 "esnap_clone": false 00:22:37.870 } 00:22:37.870 } 00:22:37.870 } 00:22:37.870 ]' 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 84771480-b2ae-4efa-8445-ba7a61c9a796 
--l2p_dram_limit 10' 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:37.870 14:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 84771480-b2ae-4efa-8445-ba7a61c9a796 --l2p_dram_limit 10 -c nvc0n1p0 00:22:38.132 [2024-11-17 14:59:23.413891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.132 [2024-11-17 14:59:23.413944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:38.132 [2024-11-17 14:59:23.413959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:38.132 [2024-11-17 14:59:23.413982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.132 [2024-11-17 14:59:23.414026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.132 [2024-11-17 14:59:23.414034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:38.132 [2024-11-17 14:59:23.414042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:38.132 [2024-11-17 14:59:23.414048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.132 [2024-11-17 14:59:23.414068] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:38.132 [2024-11-17 14:59:23.414622] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:38.132 [2024-11-17 14:59:23.414641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.132 [2024-11-17 14:59:23.414647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:38.132 [2024-11-17 14:59:23.414655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:22:38.132 [2024-11-17 14:59:23.414661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.132 [2024-11-17 14:59:23.414714] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID af70b449-0b17-4308-b150-b4a01c1c96c0 00:22:38.132 [2024-11-17 14:59:23.415690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.132 [2024-11-17 14:59:23.415722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:38.132 [2024-11-17 14:59:23.415730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:38.133 [2024-11-17 14:59:23.415737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.133 [2024-11-17 14:59:23.420482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.133 [2024-11-17 14:59:23.420510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:38.133 [2024-11-17 14:59:23.420520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.710 ms 00:22:38.133 [2024-11-17 14:59:23.420528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.133 [2024-11-17 14:59:23.420594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.133 [2024-11-17 14:59:23.420603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:38.133 [2024-11-17 14:59:23.420609] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:38.133 [2024-11-17 14:59:23.420619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.133 [2024-11-17 14:59:23.420659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.133 [2024-11-17 14:59:23.420668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:38.133 [2024-11-17 14:59:23.420674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:38.133 [2024-11-17 14:59:23.420683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.133 [2024-11-17 14:59:23.420699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:38.133 [2024-11-17 14:59:23.423572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.133 [2024-11-17 14:59:23.423598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:38.133 [2024-11-17 14:59:23.423609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:22:38.133 [2024-11-17 14:59:23.423614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.133 [2024-11-17 14:59:23.423641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.133 [2024-11-17 14:59:23.423663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:38.133 [2024-11-17 14:59:23.423671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:38.133 [2024-11-17 14:59:23.423677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.133 [2024-11-17 14:59:23.423691] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:38.133 [2024-11-17 14:59:23.423794] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:38.133 [2024-11-17 14:59:23.423806] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:38.133 [2024-11-17 14:59:23.423814] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:38.133 [2024-11-17 14:59:23.423823] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:38.133 [2024-11-17 14:59:23.423830] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:38.133 [2024-11-17 14:59:23.423838] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:38.133 [2024-11-17 14:59:23.423843] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:38.133 [2024-11-17 14:59:23.423851] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:38.133 [2024-11-17 14:59:23.423856] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:38.133 [2024-11-17 14:59:23.423863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.133 [2024-11-17 14:59:23.423869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:38.133 [2024-11-17 14:59:23.423876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:22:38.133 [2024-11-17 14:59:23.423886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.133 [2024-11-17 14:59:23.423966] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.133 [2024-11-17 14:59:23.423974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:38.133 [2024-11-17 14:59:23.423981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:38.133 [2024-11-17 14:59:23.423987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.133 [2024-11-17 14:59:23.424065] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:38.133 [2024-11-17 14:59:23.424072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:38.133 [2024-11-17 14:59:23.424079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:38.133 [2024-11-17 14:59:23.424085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:38.133 [2024-11-17 14:59:23.424097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:38.133 [2024-11-17 14:59:23.424108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:38.133 [2024-11-17 14:59:23.424115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:38.133 [2024-11-17 14:59:23.424126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:38.133 [2024-11-17 14:59:23.424131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:38.133 [2024-11-17 14:59:23.424137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:38.133 [2024-11-17 14:59:23.424142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:38.133 [2024-11-17 14:59:23.424149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:38.133 [2024-11-17 14:59:23.424153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:38.133 [2024-11-17 14:59:23.424167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:38.133 [2024-11-17 14:59:23.424175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:38.133 [2024-11-17 14:59:23.424186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.133 [2024-11-17 14:59:23.424197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:38.133 [2024-11-17 14:59:23.424202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.133 [2024-11-17 14:59:23.424213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:38.133 [2024-11-17 14:59:23.424219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.133 [2024-11-17 14:59:23.424230] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:38.133 [2024-11-17 14:59:23.424236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.133 [2024-11-17 14:59:23.424247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:38.133 [2024-11-17 14:59:23.424254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:38.133 [2024-11-17 14:59:23.424265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:38.133 [2024-11-17 14:59:23.424270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:38.133 [2024-11-17 14:59:23.424275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:38.133 [2024-11-17 14:59:23.424280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:38.133 [2024-11-17 14:59:23.424286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:38.133 [2024-11-17 14:59:23.424291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:38.133 [2024-11-17 14:59:23.424302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:38.133 [2024-11-17 14:59:23.424307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424312] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:38.133 [2024-11-17 14:59:23.424319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:38.133 [2024-11-17 14:59:23.424324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:38.133 [2024-11-17 14:59:23.424331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.133 [2024-11-17 14:59:23.424337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:38.133 [2024-11-17 14:59:23.424344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:38.133 [2024-11-17 14:59:23.424350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:38.133 [2024-11-17 14:59:23.424357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:38.133 [2024-11-17 14:59:23.424362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:38.133 [2024-11-17 14:59:23.424368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:38.133 [2024-11-17 14:59:23.424375] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:38.133 [2024-11-17 14:59:23.424384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:38.133 [2024-11-17 14:59:23.424392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:38.133 [2024-11-17 14:59:23.424398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:38.133 [2024-11-17 14:59:23.424403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:38.133 [2024-11-17 14:59:23.424410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:38.133 [2024-11-17 14:59:23.424415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:38.133 [2024-11-17 14:59:23.424421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:38.134 [2024-11-17 14:59:23.424427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:38.134 [2024-11-17 14:59:23.424433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:38.134 [2024-11-17 14:59:23.424439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:38.134 [2024-11-17 14:59:23.424446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:38.134 [2024-11-17 14:59:23.424452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:38.134 [2024-11-17 14:59:23.424458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:38.134 [2024-11-17 14:59:23.424464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:38.134 [2024-11-17 14:59:23.424472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:38.134 [2024-11-17 14:59:23.424477] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:38.134 [2024-11-17 14:59:23.424485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:38.134 [2024-11-17 14:59:23.424491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:38.134 [2024-11-17 14:59:23.424498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:38.134 [2024-11-17 14:59:23.424503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:38.134 [2024-11-17 14:59:23.424509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:38.134 [2024-11-17 14:59:23.424514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.134 [2024-11-17 14:59:23.424522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:38.134 [2024-11-17 14:59:23.424528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:22:38.134 [2024-11-17 14:59:23.424534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.134 [2024-11-17 14:59:23.424573] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:38.134 [2024-11-17 14:59:23.424584] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:42.347 [2024-11-17 14:59:27.060911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.060997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:42.347 [2024-11-17 14:59:27.061016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3636.319 ms 00:22:42.347 [2024-11-17 14:59:27.061028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.093124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.093190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:42.347 [2024-11-17 14:59:27.093205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.844 ms 00:22:42.347 [2024-11-17 14:59:27.093217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.093360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.093374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:42.347 [2024-11-17 14:59:27.093384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:42.347 [2024-11-17 14:59:27.093404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.128893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.128963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.347 [2024-11-17 14:59:27.128975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.450 ms 00:22:42.347 [2024-11-17 14:59:27.128987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.129030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.129041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:42.347 [2024-11-17 14:59:27.129050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:42.347 [2024-11-17 14:59:27.129061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.129658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.129693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.347 [2024-11-17 14:59:27.129704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:22:42.347 [2024-11-17 14:59:27.129714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.129831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.129845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:42.347 [2024-11-17 14:59:27.129855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:22:42.347 [2024-11-17 14:59:27.129868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.147328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.147378] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:42.347 [2024-11-17 14:59:27.147389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.441 ms 00:22:42.347 [2024-11-17 14:59:27.147399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.160700] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:42.347 [2024-11-17 14:59:27.164619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.164816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:42.347 [2024-11-17 14:59:27.164843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.111 ms 00:22:42.347 [2024-11-17 14:59:27.164851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.277443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.277703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:42.347 [2024-11-17 14:59:27.277736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.552 ms 00:22:42.347 [2024-11-17 14:59:27.277746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.277981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.277998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:42.347 [2024-11-17 14:59:27.278013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:22:42.347 [2024-11-17 14:59:27.278021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.305111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.305163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:42.347 [2024-11-17 14:59:27.305181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.025 ms 00:22:42.347 [2024-11-17 14:59:27.305189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.330664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.330714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:42.347 [2024-11-17 14:59:27.330730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.412 ms 00:22:42.347 [2024-11-17 14:59:27.330737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.331388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.331413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:42.347 [2024-11-17 14:59:27.331426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:22:42.347 [2024-11-17 14:59:27.331437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.347 [2024-11-17 14:59:27.416929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.347 [2024-11-17 14:59:27.416983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:42.347 [2024-11-17 14:59:27.417003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.435 ms 00:22:42.347 [2024-11-17 14:59:27.417013] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.348 [2024-11-17 14:59:27.445016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.348 [2024-11-17 14:59:27.445069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:42.348 [2024-11-17 14:59:27.445085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.900 ms 00:22:42.348 [2024-11-17 14:59:27.445094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.348 [2024-11-17 14:59:27.471136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.348 [2024-11-17 14:59:27.471340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:42.348 [2024-11-17 14:59:27.471368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.984 ms 00:22:42.348 [2024-11-17 14:59:27.471376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.348 [2024-11-17 14:59:27.497719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.348 [2024-11-17 14:59:27.497771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:42.348 [2024-11-17 14:59:27.497787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.198 ms 00:22:42.348 [2024-11-17 14:59:27.497795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.348 [2024-11-17 14:59:27.497853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.348 [2024-11-17 14:59:27.497863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:42.348 [2024-11-17 14:59:27.497879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:42.348 [2024-11-17 14:59:27.497887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.348 [2024-11-17 14:59:27.498008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.348 [2024-11-17 14:59:27.498023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:42.348 [2024-11-17 14:59:27.498035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:42.348 [2024-11-17 14:59:27.498042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.348 [2024-11-17 14:59:27.499214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4084.770 ms, result 0 00:22:42.348 { 00:22:42.348 "name": "ftl0", 00:22:42.348 "uuid": "af70b449-0b17-4308-b150-b4a01c1c96c0" 00:22:42.348 } 00:22:42.348 14:59:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:42.348 14:59:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:42.348 14:59:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:42.348 14:59:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:42.348 14:59:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:42.609 /dev/nbd0 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:42.609 1+0 records in 00:22:42.609 1+0 records out 00:22:42.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0015659 s, 2.6 MB/s 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:22:42.609 14:59:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:42.609 [2024-11-17 14:59:28.065160] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:42.609 [2024-11-17 14:59:28.065298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77430 ] 00:22:42.871 [2024-11-17 14:59:28.226281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.871 [2024-11-17 14:59:28.346588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.270  [2024-11-17T14:59:30.756Z] Copying: 187/1024 [MB] (187 MBps) [2024-11-17T14:59:31.698Z] Copying: 375/1024 [MB] (187 MBps) [2024-11-17T14:59:32.696Z] Copying: 566/1024 [MB] (191 MBps) [2024-11-17T14:59:33.631Z] Copying: 759/1024 [MB] (192 MBps) [2024-11-17T14:59:33.890Z] Copying: 1000/1024 [MB] (241 MBps) [2024-11-17T14:59:34.459Z] Copying: 1024/1024 [MB] (average 201 MBps) 00:22:48.916 00:22:48.916 14:59:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:51.447 14:59:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:51.447 [2024-11-17 14:59:36.515755] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:51.447 [2024-11-17 14:59:36.515869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77520 ] 00:22:51.447 [2024-11-17 14:59:36.674911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.447 [2024-11-17 14:59:36.782633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.831  [2024-11-17T14:59:39.308Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-17T14:59:40.246Z] Copying: 23/1024 [MB] (11 MBps) [2024-11-17T14:59:41.188Z] Copying: 33/1024 [MB] (10 MBps) [2024-11-17T14:59:42.133Z] Copying: 44180/1048576 [kB] (10156 kBps) [2024-11-17T14:59:43.068Z] Copying: 53544/1048576 [kB] (9364 kBps) [2024-11-17T14:59:44.447Z] Copying: 83/1024 [MB] (31 MBps) [2024-11-17T14:59:45.383Z] Copying: 118/1024 [MB] (34 MBps) [2024-11-17T14:59:46.320Z] Copying: 147/1024 [MB] (29 MBps) [2024-11-17T14:59:47.264Z] Copying: 181/1024 [MB] (34 MBps) [2024-11-17T14:59:48.208Z] Copying: 209/1024 [MB] (27 MBps) [2024-11-17T14:59:49.146Z] Copying: 237/1024 [MB] (27 MBps) [2024-11-17T14:59:50.080Z] Copying: 269/1024 [MB] (31 MBps) [2024-11-17T14:59:51.019Z] Copying: 304/1024 [MB] (34 MBps) [2024-11-17T14:59:52.403Z] Copying: 334/1024 [MB] (30 MBps) [2024-11-17T14:59:53.342Z] Copying: 363/1024 [MB] (28 MBps) [2024-11-17T14:59:54.286Z] Copying: 395/1024 [MB] (32 MBps) [2024-11-17T14:59:55.223Z] Copying: 423/1024 [MB] (28 MBps) [2024-11-17T14:59:56.158Z] Copying: 454/1024 [MB] (31 MBps) [2024-11-17T14:59:57.092Z] Copying: 487/1024 [MB] (32 MBps) [2024-11-17T14:59:58.026Z] Copying: 522/1024 [MB] (34 MBps) [2024-11-17T14:59:59.414Z] Copying: 554/1024 [MB] (32 MBps) [2024-11-17T15:00:00.359Z] Copying: 586/1024 [MB] (31 MBps) [2024-11-17T15:00:01.298Z] Copying: 612/1024 [MB] (26 MBps) [2024-11-17T15:00:02.291Z] Copying: 644/1024 [MB] (32 MBps) [2024-11-17T15:00:03.232Z] Copying: 668/1024 [MB] (23 MBps) [2024-11-17T15:00:04.173Z] Copying: 692/1024 [MB] (24 MBps) [2024-11-17T15:00:05.117Z] Copying: 718/1024 [MB] (25 MBps) [2024-11-17T15:00:06.060Z] Copying: 742/1024 [MB] (24 MBps) [2024-11-17T15:00:07.443Z] Copying: 766/1024 [MB] (23 MBps) [2024-11-17T15:00:08.386Z] Copying: 790/1024 [MB] (23 MBps) [2024-11-17T15:00:09.327Z] Copying: 814/1024 [MB] (24 MBps) [2024-11-17T15:00:10.269Z] Copying: 841/1024 [MB] (26 MBps) [2024-11-17T15:00:11.213Z] Copying: 861/1024 [MB] (19 MBps) [2024-11-17T15:00:12.157Z] Copying: 876/1024 [MB] (15 MBps) [2024-11-17T15:00:13.106Z] Copying: 902/1024 [MB] (26 MBps) [2024-11-17T15:00:14.051Z] Copying: 919/1024 [MB] (16 MBps) [2024-11-17T15:00:15.439Z] Copying: 934/1024 [MB] (15 MBps) [2024-11-17T15:00:16.384Z] Copying: 957/1024 [MB] (23 MBps) [2024-11-17T15:00:17.328Z] Copying: 976/1024 [MB] (18 MBps) [2024-11-17T15:00:18.273Z] Copying: 1002/1024 [MB] (25 MBps) [2024-11-17T15:00:18.534Z] Copying: 1024/1024 [MB] (average 24 MBps) 00:23:32.991 00:23:33.296 15:00:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:33.297 15:00:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:33.297 15:00:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:33.559 [2024-11-17 15:00:18.898991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.559 [2024-11-17 
15:00:18.899158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:33.559 [2024-11-17 15:00:18.899230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:33.559 [2024-11-17 15:00:18.899257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.559 [2024-11-17 15:00:18.899302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:33.559 [2024-11-17 15:00:18.902093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.559 [2024-11-17 15:00:18.902201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:33.559 [2024-11-17 15:00:18.902260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.615 ms 00:23:33.559 [2024-11-17 15:00:18.902283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.559 [2024-11-17 15:00:18.904851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.559 [2024-11-17 15:00:18.904976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:33.559 [2024-11-17 15:00:18.905037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:23:33.559 [2024-11-17 15:00:18.905061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.559 [2024-11-17 15:00:18.921465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.559 [2024-11-17 15:00:18.921570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:33.559 [2024-11-17 15:00:18.921620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.370 ms 00:23:33.559 [2024-11-17 15:00:18.921642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.559 [2024-11-17 15:00:18.928272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.559 [2024-11-17 15:00:18.928364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:33.559 [2024-11-17 15:00:18.928413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.585 ms 00:23:33.559 [2024-11-17 15:00:18.928434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.559 [2024-11-17 15:00:18.952640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.559 [2024-11-17 15:00:18.952756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:33.559 [2024-11-17 15:00:18.952810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.123 ms 00:23:33.559 [2024-11-17 15:00:18.952832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.559 [2024-11-17 15:00:18.969135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.559 [2024-11-17 15:00:18.969265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:33.559 [2024-11-17 15:00:18.969330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.030 ms 00:23:33.559 [2024-11-17 15:00:18.969354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.559 [2024-11-17 15:00:18.969561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.559 [2024-11-17 15:00:18.969605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:33.559 [2024-11-17 15:00:18.969628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:23:33.559 [2024-11-17 
15:00:18.969647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.559 [2024-11-17 15:00:18.993334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.559 [2024-11-17 15:00:18.993451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:33.559 [2024-11-17 15:00:18.993504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.656 ms 00:23:33.560 [2024-11-17 15:00:18.993525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.560 [2024-11-17 15:00:19.017500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.560 [2024-11-17 15:00:19.017622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:33.560 [2024-11-17 15:00:19.017697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.540 ms 00:23:33.560 [2024-11-17 15:00:19.017721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.560 [2024-11-17 15:00:19.040652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.560 [2024-11-17 15:00:19.040777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:33.560 [2024-11-17 15:00:19.040831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.880 ms 00:23:33.560 [2024-11-17 15:00:19.040853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.560 [2024-11-17 15:00:19.064850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.560 [2024-11-17 15:00:19.064979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:33.560 [2024-11-17 15:00:19.065039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.634 ms 00:23:33.560 [2024-11-17 15:00:19.065063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.560 [2024-11-17 15:00:19.065155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:33.560 [2024-11-17 15:00:19.065204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.065290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.065322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.065353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.065540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.065893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.065997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 
wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066819] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:33.560 [2024-11-17 15:00:19.066952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.066960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.066968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.066977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.066986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.066994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067035] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:33.561 [2024-11-17 15:00:19.067166] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:33.561 [2024-11-17 15:00:19.067175] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af70b449-0b17-4308-b150-b4a01c1c96c0 00:23:33.561 [2024-11-17 15:00:19.067182] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:33.561 [2024-11-17 15:00:19.067193] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:33.561 [2024-11-17 15:00:19.067199] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:33.561 [2024-11-17 15:00:19.067210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:33.561 [2024-11-17 15:00:19.067217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:33.561 [2024-11-17 15:00:19.067226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:33.561 [2024-11-17 15:00:19.067233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:33.561 [2024-11-17 15:00:19.067241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:33.561 [2024-11-17 15:00:19.067249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:33.561 [2024-11-17 15:00:19.067259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.561 [2024-11-17 15:00:19.067268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Dump statistics 00:23:33.561 [2024-11-17 15:00:19.067279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:23:33.561 [2024-11-17 15:00:19.067286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.561 [2024-11-17 15:00:19.080397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.561 [2024-11-17 15:00:19.080511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:33.561 [2024-11-17 15:00:19.080561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.057 ms 00:23:33.561 [2024-11-17 15:00:19.080584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.561 [2024-11-17 15:00:19.080981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.561 [2024-11-17 15:00:19.081023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:33.561 [2024-11-17 15:00:19.081084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:23:33.561 [2024-11-17 15:00:19.081105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.125072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.125204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.822 [2024-11-17 15:00:19.125259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.125281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.125358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.125380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.822 [2024-11-17 15:00:19.125401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.125419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.125509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.125538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.822 [2024-11-17 15:00:19.125561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.125646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.125687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.125708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.822 [2024-11-17 15:00:19.125729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.125747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.207150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.207366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.822 [2024-11-17 15:00:19.207430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.207453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.276877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 
15:00:19.277106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.822 [2024-11-17 15:00:19.277171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.277195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.277319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.277345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.822 [2024-11-17 15:00:19.277372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.277392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.277458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.277641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.822 [2024-11-17 15:00:19.277669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.277689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.277825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.277851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.822 [2024-11-17 15:00:19.277873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.278090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.278201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.278279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.822 [2024-11-17 15:00:19.278309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.278486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.278619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.278654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.822 [2024-11-17 15:00:19.278742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.278801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.278881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.822 [2024-11-17 15:00:19.279092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.822 [2024-11-17 15:00:19.279177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.822 [2024-11-17 15:00:19.279200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.822 [2024-11-17 15:00:19.279786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.731 ms, result 0 00:23:33.822 true 00:23:33.822 15:00:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77283 00:23:33.822 15:00:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77283 00:23:33.822 15:00:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:34.082 [2024-11-17 15:00:19.375150] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:34.083 [2024-11-17 15:00:19.375409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77971 ] 00:23:34.083 [2024-11-17 15:00:19.530675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.343 [2024-11-17 15:00:19.650728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.730  [2024-11-17T15:00:22.215Z] Copying: 231/1024 [MB] (231 MBps) [2024-11-17T15:00:23.157Z] Copying: 490/1024 [MB] (258 MBps) [2024-11-17T15:00:24.098Z] Copying: 752/1024 [MB] (262 MBps) [2024-11-17T15:00:24.099Z] Copying: 1010/1024 [MB] (257 MBps) [2024-11-17T15:00:24.673Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:23:39.130 00:23:39.130 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77283 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:39.130 15:00:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:39.392 [2024-11-17 15:00:24.707218] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:39.392 [2024-11-17 15:00:24.707363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78025 ] 00:23:39.392 [2024-11-17 15:00:24.874204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.653 [2024-11-17 15:00:24.991968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.915 [2024-11-17 15:00:25.281810] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:39.915 [2024-11-17 15:00:25.281889] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:39.915 [2024-11-17 15:00:25.347234] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:39.915 [2024-11-17 15:00:25.348002] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:39.915 [2024-11-17 15:00:25.348505] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:40.489 [2024-11-17 15:00:25.926633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.926866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.489 [2024-11-17 15:00:25.926891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:40.489 [2024-11-17 15:00:25.926900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.927001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.927014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.489 [2024-11-17 15:00:25.927023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 
00:23:40.489 [2024-11-17 15:00:25.927030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.927051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.489 [2024-11-17 15:00:25.927777] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.489 [2024-11-17 15:00:25.927798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.927808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.489 [2024-11-17 15:00:25.927817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:23:40.489 [2024-11-17 15:00:25.927826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.929472] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:40.489 [2024-11-17 15:00:25.943847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.943900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:40.489 [2024-11-17 15:00:25.943914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.376 ms 00:23:40.489 [2024-11-17 15:00:25.943944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.944022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.944032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:40.489 [2024-11-17 15:00:25.944042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:40.489 [2024-11-17 15:00:25.944051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.952284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.952324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.489 [2024-11-17 15:00:25.952335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.156 ms 00:23:40.489 [2024-11-17 15:00:25.952343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.952424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.952432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.489 [2024-11-17 15:00:25.952441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:40.489 [2024-11-17 15:00:25.952450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.952497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.952511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.489 [2024-11-17 15:00:25.952520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:40.489 [2024-11-17 15:00:25.952527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.952552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.489 [2024-11-17 15:00:25.956664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.956848] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.489 [2024-11-17 15:00:25.956868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.118 ms 00:23:40.489 [2024-11-17 15:00:25.956877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.956914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.956948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.489 [2024-11-17 15:00:25.956958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:40.489 [2024-11-17 15:00:25.956966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.957019] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:40.489 [2024-11-17 15:00:25.957050] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:40.489 [2024-11-17 15:00:25.957089] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:40.489 [2024-11-17 15:00:25.957106] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:40.489 [2024-11-17 15:00:25.957213] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.489 [2024-11-17 15:00:25.957227] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:40.489 [2024-11-17 15:00:25.957240] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:40.489 [2024-11-17 15:00:25.957251] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.489 [2024-11-17 15:00:25.957264] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.489 [2024-11-17 15:00:25.957273] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:40.489 [2024-11-17 15:00:25.957281] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.489 [2024-11-17 15:00:25.957289] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.489 [2024-11-17 15:00:25.957299] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.489 [2024-11-17 15:00:25.957308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.489 [2024-11-17 15:00:25.957316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.489 [2024-11-17 15:00:25.957323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:23:40.489 [2024-11-17 15:00:25.957330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.489 [2024-11-17 15:00:25.957417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.490 [2024-11-17 15:00:25.957430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.490 [2024-11-17 15:00:25.957439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:40.490 [2024-11-17 15:00:25.957446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.490 [2024-11-17 15:00:25.957549] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] 
NV cache layout: 00:23:40.490 [2024-11-17 15:00:25.957561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.490 [2024-11-17 15:00:25.957570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.490 [2024-11-17 15:00:25.957579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.490 [2024-11-17 15:00:25.957595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:40.490 [2024-11-17 15:00:25.957611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:40.490 [2024-11-17 15:00:25.957623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.490 [2024-11-17 15:00:25.957639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.490 [2024-11-17 15:00:25.957653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:40.490 [2024-11-17 15:00:25.957661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.490 [2024-11-17 15:00:25.957671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.490 [2024-11-17 15:00:25.957678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:40.490 [2024-11-17 15:00:25.957685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:40.490 [2024-11-17 15:00:25.957698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:40.490 [2024-11-17 15:00:25.957706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.490 [2024-11-17 15:00:25.957721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.490 [2024-11-17 15:00:25.957734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.490 [2024-11-17 15:00:25.957741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.490 [2024-11-17 15:00:25.957754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.490 [2024-11-17 15:00:25.957760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.490 [2024-11-17 15:00:25.957773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:40.490 [2024-11-17 15:00:25.957782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.490 [2024-11-17 15:00:25.957796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.490 [2024-11-17 15:00:25.957803] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.490 [2024-11-17 15:00:25.957816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.490 [2024-11-17 15:00:25.957822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:40.490 [2024-11-17 15:00:25.957829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.490 [2024-11-17 15:00:25.957835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.490 [2024-11-17 15:00:25.957841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:40.490 [2024-11-17 15:00:25.957848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.490 [2024-11-17 15:00:25.957863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:40.490 [2024-11-17 15:00:25.957870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957877] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.490 [2024-11-17 15:00:25.957885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.490 [2024-11-17 15:00:25.957895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.490 [2024-11-17 15:00:25.957905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.490 [2024-11-17 15:00:25.957915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:40.490 [2024-11-17 15:00:25.957939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.490 [2024-11-17 15:00:25.957946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.490 [2024-11-17 15:00:25.957954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.490 [2024-11-17 15:00:25.957961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.490 [2024-11-17 15:00:25.957968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.490 [2024-11-17 15:00:25.957976] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.490 [2024-11-17 15:00:25.957987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.490 [2024-11-17 15:00:25.957995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:40.490 [2024-11-17 15:00:25.958003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:40.490 [2024-11-17 15:00:25.958010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:40.490 [2024-11-17 15:00:25.958018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:40.490 [2024-11-17 15:00:25.958025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:40.490 [2024-11-17 15:00:25.958032] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:40.490 [2024-11-17 15:00:25.958041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:40.490 [2024-11-17 15:00:25.958049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:40.490 [2024-11-17 15:00:25.958056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:40.490 [2024-11-17 15:00:25.958063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:40.490 [2024-11-17 15:00:25.958071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:40.490 [2024-11-17 15:00:25.958078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:40.490 [2024-11-17 15:00:25.958085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:40.490 [2024-11-17 15:00:25.958092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:40.490 [2024-11-17 15:00:25.958099] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:40.490 [2024-11-17 15:00:25.958107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.490 [2024-11-17 15:00:25.958118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.490 [2024-11-17 15:00:25.958126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.490 [2024-11-17 15:00:25.958133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.490 [2024-11-17 15:00:25.958140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.490 [2024-11-17 15:00:25.958148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.490 [2024-11-17 15:00:25.958155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.490 [2024-11-17 15:00:25.958164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:23:40.490 [2024-11-17 15:00:25.958171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.490 [2024-11-17 15:00:25.990369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.491 [2024-11-17 15:00:25.990417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.491 [2024-11-17 15:00:25.990430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.153 ms 00:23:40.491 [2024-11-17 15:00:25.990439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.491 [2024-11-17 15:00:25.990529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:40.491 [2024-11-17 15:00:25.990542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.491 [2024-11-17 15:00:25.990552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:40.491 [2024-11-17 15:00:25.990559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.752 [2024-11-17 15:00:26.039787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.752 [2024-11-17 15:00:26.039840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.752 [2024-11-17 15:00:26.039853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.168 ms 00:23:40.752 [2024-11-17 15:00:26.039866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.752 [2024-11-17 15:00:26.039943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.752 [2024-11-17 15:00:26.039961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.752 [2024-11-17 15:00:26.039971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:40.752 [2024-11-17 15:00:26.039979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.752 [2024-11-17 15:00:26.040585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.752 [2024-11-17 15:00:26.040619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.752 [2024-11-17 15:00:26.040632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:23:40.752 [2024-11-17 15:00:26.040641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.752 [2024-11-17 15:00:26.040800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.752 [2024-11-17 15:00:26.040811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.752 [2024-11-17 15:00:26.040820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:23:40.752 [2024-11-17 15:00:26.040828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.752 [2024-11-17 15:00:26.057075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.752 [2024-11-17 15:00:26.057122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.752 [2024-11-17 15:00:26.057135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.227 ms 00:23:40.752 [2024-11-17 15:00:26.057143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.752 [2024-11-17 15:00:26.071500] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:40.752 [2024-11-17 15:00:26.071548] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:40.752 [2024-11-17 15:00:26.071563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.752 [2024-11-17 15:00:26.071572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:40.752 [2024-11-17 15:00:26.071582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.312 ms 00:23:40.752 [2024-11-17 15:00:26.071590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.098287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.098487] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:40.753 [2024-11-17 15:00:26.098524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.637 ms 00:23:40.753 [2024-11-17 15:00:26.098534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.111670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.111728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:40.753 [2024-11-17 15:00:26.111741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.004 ms 00:23:40.753 [2024-11-17 15:00:26.111750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.124754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.124916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:40.753 [2024-11-17 15:00:26.124995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.955 ms 00:23:40.753 [2024-11-17 15:00:26.125022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.125707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.125766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.753 [2024-11-17 15:00:26.125903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:23:40.753 [2024-11-17 15:00:26.125946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.193383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.193578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:40.753 [2024-11-17 15:00:26.193645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.396 ms 00:23:40.753 [2024-11-17 15:00:26.193670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.205145] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:40.753 [2024-11-17 15:00:26.208632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.208783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.753 [2024-11-17 15:00:26.208839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.905 ms 00:23:40.753 [2024-11-17 15:00:26.208862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.209001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.209033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:40.753 [2024-11-17 15:00:26.209056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:40.753 [2024-11-17 15:00:26.209131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.209236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.209265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.753 [2024-11-17 15:00:26.209286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:40.753 [2024-11-17 15:00:26.209306] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.209387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.209423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.753 [2024-11-17 15:00:26.209590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:40.753 [2024-11-17 15:00:26.209616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.209671] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:40.753 [2024-11-17 15:00:26.209698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.209719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:40.753 [2024-11-17 15:00:26.209739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:40.753 [2024-11-17 15:00:26.209760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.235956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.236125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.753 [2024-11-17 15:00:26.236185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.155 ms 00:23:40.753 [2024-11-17 15:00:26.236211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.236744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.753 [2024-11-17 15:00:26.236844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.753 [2024-11-17 15:00:26.237039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:40.753 [2024-11-17 15:00:26.237069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.753 [2024-11-17 15:00:26.238716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 311.590 ms, result 0 00:23:42.143  [2024-11-17T15:00:28.258Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-17T15:00:29.686Z] Copying: 31/1024 [MB] (12 MBps) [2024-11-17T15:00:30.278Z] Copying: 51/1024 [MB] (20 MBps) [2024-11-17T15:00:31.664Z] Copying: 63/1024 [MB] (11 MBps) [2024-11-17T15:00:32.607Z] Copying: 73/1024 [MB] (10 MBps) [2024-11-17T15:00:33.550Z] Copying: 91/1024 [MB] (18 MBps) [2024-11-17T15:00:34.496Z] Copying: 115/1024 [MB] (23 MBps) [2024-11-17T15:00:35.441Z] Copying: 134/1024 [MB] (19 MBps) [2024-11-17T15:00:36.387Z] Copying: 153/1024 [MB] (19 MBps) [2024-11-17T15:00:37.332Z] Copying: 176/1024 [MB] (22 MBps) [2024-11-17T15:00:38.279Z] Copying: 204/1024 [MB] (28 MBps) [2024-11-17T15:00:39.666Z] Copying: 229/1024 [MB] (25 MBps) [2024-11-17T15:00:40.610Z] Copying: 253/1024 [MB] (24 MBps) [2024-11-17T15:00:41.555Z] Copying: 281/1024 [MB] (27 MBps) [2024-11-17T15:00:42.499Z] Copying: 302/1024 [MB] (21 MBps) [2024-11-17T15:00:43.443Z] Copying: 331/1024 [MB] (29 MBps) [2024-11-17T15:00:44.387Z] Copying: 371/1024 [MB] (39 MBps) [2024-11-17T15:00:45.332Z] Copying: 404/1024 [MB] (33 MBps) [2024-11-17T15:00:46.277Z] Copying: 432/1024 [MB] (27 MBps) [2024-11-17T15:00:47.306Z] Copying: 449/1024 [MB] (16 MBps) [2024-11-17T15:00:48.286Z] Copying: 459/1024 [MB] (10 MBps) [2024-11-17T15:00:49.673Z] Copying: 469/1024 [MB] (10 MBps) [2024-11-17T15:00:50.617Z] Copying: 494/1024 [MB] (24 MBps) 
[2024-11-17T15:00:51.560Z] Copying: 530/1024 [MB] (36 MBps) [2024-11-17T15:00:52.501Z] Copying: 569/1024 [MB] (39 MBps) [2024-11-17T15:00:53.444Z] Copying: 586/1024 [MB] (16 MBps) [2024-11-17T15:00:54.389Z] Copying: 598/1024 [MB] (12 MBps) [2024-11-17T15:00:55.333Z] Copying: 615/1024 [MB] (16 MBps) [2024-11-17T15:00:56.276Z] Copying: 633/1024 [MB] (18 MBps) [2024-11-17T15:00:57.663Z] Copying: 674/1024 [MB] (40 MBps) [2024-11-17T15:00:58.608Z] Copying: 717/1024 [MB] (42 MBps) [2024-11-17T15:00:59.552Z] Copying: 747/1024 [MB] (29 MBps) [2024-11-17T15:01:00.500Z] Copying: 757/1024 [MB] (10 MBps) [2024-11-17T15:01:01.444Z] Copying: 787/1024 [MB] (29 MBps) [2024-11-17T15:01:02.389Z] Copying: 822/1024 [MB] (35 MBps) [2024-11-17T15:01:03.332Z] Copying: 842/1024 [MB] (19 MBps) [2024-11-17T15:01:04.276Z] Copying: 859/1024 [MB] (17 MBps) [2024-11-17T15:01:05.662Z] Copying: 878/1024 [MB] (18 MBps) [2024-11-17T15:01:06.605Z] Copying: 900/1024 [MB] (22 MBps) [2024-11-17T15:01:07.549Z] Copying: 941/1024 [MB] (40 MBps) [2024-11-17T15:01:08.493Z] Copying: 982/1024 [MB] (41 MBps) [2024-11-17T15:01:09.435Z] Copying: 995/1024 [MB] (12 MBps) [2024-11-17T15:01:10.380Z] Copying: 1009/1024 [MB] (14 MBps) [2024-11-17T15:01:10.952Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-17T15:01:10.952Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-17 15:01:10.817458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.409 [2024-11-17 15:01:10.817543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:25.409 [2024-11-17 15:01:10.817561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:25.409 [2024-11-17 15:01:10.817571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.409 [2024-11-17 15:01:10.819048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:25.409 [2024-11-17 15:01:10.823493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.409 [2024-11-17 15:01:10.823550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:25.409 [2024-11-17 15:01:10.823565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.378 ms 00:24:25.409 [2024-11-17 15:01:10.823575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.409 [2024-11-17 15:01:10.838509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.409 [2024-11-17 15:01:10.838593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:25.409 [2024-11-17 15:01:10.838610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.933 ms 00:24:25.409 [2024-11-17 15:01:10.838620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.409 [2024-11-17 15:01:10.865807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.409 [2024-11-17 15:01:10.866049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:25.409 [2024-11-17 15:01:10.866074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.167 ms 00:24:25.409 [2024-11-17 15:01:10.866083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.409 [2024-11-17 15:01:10.872242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.409 [2024-11-17 15:01:10.872296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:25.409 [2024-11-17 15:01:10.872308] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:24:25.410 [2024-11-17 15:01:10.872316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.410 [2024-11-17 15:01:10.899697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.410 [2024-11-17 15:01:10.899754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:25.410 [2024-11-17 15:01:10.899768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.334 ms 00:24:25.410 [2024-11-17 15:01:10.899776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.410 [2024-11-17 15:01:10.916239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.410 [2024-11-17 15:01:10.916290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:25.410 [2024-11-17 15:01:10.916304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.412 ms 00:24:25.410 [2024-11-17 15:01:10.916313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.673 [2024-11-17 15:01:11.101124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.673 [2024-11-17 15:01:11.101337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:25.673 [2024-11-17 15:01:11.101361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 184.755 ms 00:24:25.673 [2024-11-17 15:01:11.101379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.673 [2024-11-17 15:01:11.128038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.673 [2024-11-17 15:01:11.128231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:25.673 [2024-11-17 15:01:11.128253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.636 ms 00:24:25.673 [2024-11-17 15:01:11.128261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.673 [2024-11-17 15:01:11.154556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.673 [2024-11-17 15:01:11.154607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:25.673 [2024-11-17 15:01:11.154619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.184 ms 00:24:25.673 [2024-11-17 15:01:11.154627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.673 [2024-11-17 15:01:11.179872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.673 [2024-11-17 15:01:11.179945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:25.673 [2024-11-17 15:01:11.179961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.195 ms 00:24:25.673 [2024-11-17 15:01:11.179968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.673 [2024-11-17 15:01:11.204940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.673 [2024-11-17 15:01:11.204988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:25.673 [2024-11-17 15:01:11.205002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.894 ms 00:24:25.673 [2024-11-17 15:01:11.205009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.673 [2024-11-17 15:01:11.205058] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:25.673 [2024-11-17 15:01:11.205073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 107264 / 261120 wr_cnt: 1 state: open 00:24:25.673 [2024-11-17 15:01:11.205085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205270] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:25.673 [2024-11-17 15:01:11.205364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 
15:01:11.205462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:24:25.674 [2024-11-17 15:01:11.205667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:24:25.674 [2024-11-17 15:01:11.205871] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:25.674 [2024-11-17 15:01:11.205880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af70b449-0b17-4308-b150-b4a01c1c96c0 00:24:25.674 [2024-11-17 15:01:11.205888] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107264 00:24:25.674 [2024-11-17 15:01:11.205903] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108224 00:24:25.674 [2024-11-17 15:01:11.205932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107264 00:24:25.674 [2024-11-17 15:01:11.205942] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:24:25.674 [2024-11-17 15:01:11.205950] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:25.674 [2024-11-17 15:01:11.205959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:25.674 [2024-11-17 15:01:11.205967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:25.674 [2024-11-17 15:01:11.205974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:25.674 [2024-11-17 15:01:11.205980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:25.674 [2024-11-17 15:01:11.205988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.674 [2024-11-17 15:01:11.205997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:25.674 [2024-11-17 15:01:11.206006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:24:25.674 [2024-11-17 15:01:11.206013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.937 [2024-11-17 15:01:11.219649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.937 [2024-11-17 15:01:11.219698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:25.937 [2024-11-17 15:01:11.219736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.615 ms 00:24:25.937 [2024-11-17 15:01:11.219745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.937 [2024-11-17 15:01:11.220189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.937 [2024-11-17 15:01:11.220206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:25.937 [2024-11-17 15:01:11.220216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:24:25.937 [2024-11-17 15:01:11.220224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.937 [2024-11-17 15:01:11.257315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.937 [2024-11-17 15:01:11.257368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.937 [2024-11-17 15:01:11.257380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.937 [2024-11-17 15:01:11.257389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.937 [2024-11-17 15:01:11.257461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.937 [2024-11-17 15:01:11.257470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.937 [2024-11-17 15:01:11.257478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.937 [2024-11-17 15:01:11.257487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:25.937 [2024-11-17 15:01:11.257574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.937 [2024-11-17 15:01:11.257585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.937 [2024-11-17 15:01:11.257594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.937 [2024-11-17 15:01:11.257602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.937 [2024-11-17 15:01:11.257619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.937 [2024-11-17 15:01:11.257628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.937 [2024-11-17 15:01:11.257637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.937 [2024-11-17 15:01:11.257644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.937 [2024-11-17 15:01:11.342568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.937 [2024-11-17 15:01:11.342630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.937 [2024-11-17 15:01:11.342645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.937 [2024-11-17 15:01:11.342653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.937 [2024-11-17 15:01:11.412510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.937 [2024-11-17 15:01:11.412571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.937 [2024-11-17 15:01:11.412585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.937 [2024-11-17 15:01:11.412594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.937 [2024-11-17 15:01:11.412694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.937 [2024-11-17 15:01:11.412705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.938 [2024-11-17 15:01:11.412714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.938 [2024-11-17 15:01:11.412723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.938 [2024-11-17 15:01:11.412761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.938 [2024-11-17 15:01:11.412772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.938 [2024-11-17 15:01:11.412781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.938 [2024-11-17 15:01:11.412789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.938 [2024-11-17 15:01:11.412889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.938 [2024-11-17 15:01:11.412904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.938 [2024-11-17 15:01:11.412914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.938 [2024-11-17 15:01:11.412958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.938 [2024-11-17 15:01:11.412992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.938 [2024-11-17 15:01:11.413003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:25.938 [2024-11-17 15:01:11.413012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.938 [2024-11-17 
15:01:11.413020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.938 [2024-11-17 15:01:11.413064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.938 [2024-11-17 15:01:11.413098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.938 [2024-11-17 15:01:11.413107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.938 [2024-11-17 15:01:11.413116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.938 [2024-11-17 15:01:11.413164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.938 [2024-11-17 15:01:11.413176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.938 [2024-11-17 15:01:11.413185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.938 [2024-11-17 15:01:11.413194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.938 [2024-11-17 15:01:11.413332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 598.679 ms, result 0 00:24:27.325 00:24:27.325 00:24:27.325 15:01:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:29.241 15:01:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:29.501 [2024-11-17 15:01:14.803405] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:29.501 [2024-11-17 15:01:14.803494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78543 ] 00:24:29.501 [2024-11-17 15:01:14.952205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.761 [2024-11-17 15:01:15.066087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.024 [2024-11-17 15:01:15.357901] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.024 [2024-11-17 15:01:15.358006] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.024 [2024-11-17 15:01:15.520625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.520688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:30.024 [2024-11-17 15:01:15.520710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:30.024 [2024-11-17 15:01:15.520719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.520777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.520788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:30.024 [2024-11-17 15:01:15.520800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:30.024 [2024-11-17 15:01:15.520807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.520828] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:30.024 [2024-11-17 
15:01:15.521559] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:30.024 [2024-11-17 15:01:15.521586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.521595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:30.024 [2024-11-17 15:01:15.521604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:24:30.024 [2024-11-17 15:01:15.521612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.523352] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:30.024 [2024-11-17 15:01:15.538166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.538219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:30.024 [2024-11-17 15:01:15.538232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.816 ms 00:24:30.024 [2024-11-17 15:01:15.538241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.538335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.538345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:30.024 [2024-11-17 15:01:15.538355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:30.024 [2024-11-17 15:01:15.538363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.546788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.546837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:30.024 [2024-11-17 15:01:15.546847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.342 ms 00:24:30.024 [2024-11-17 15:01:15.546855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.546968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.546979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:30.024 [2024-11-17 15:01:15.546988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:24:30.024 [2024-11-17 15:01:15.546996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.547044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.547053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:30.024 [2024-11-17 15:01:15.547062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:30.024 [2024-11-17 15:01:15.547070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.547094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:30.024 [2024-11-17 15:01:15.551140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.551182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:30.024 [2024-11-17 15:01:15.551193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.051 ms 00:24:30.024 [2024-11-17 15:01:15.551205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
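Editorial aside, not part of the test output: the trace_step NOTICE records above and below pair a "name:" line (mngt/ftl_mngt.c:428) with a "duration:" line (mngt/ftl_mngt.c:430) for each FTL management step. A minimal, hypothetical Python sketch for tallying those per-step durations from a console log follows; it is not part of SPDK or this test suite, the helper name summarize is made up, and it assumes one log record per line (as the console originally prints them), relying only on the line format visible in this output.

    #!/usr/bin/env python3
    # Hypothetical helper: summarize FTL trace_step durations from an SPDK console log.
    # Assumes one log record per line; matches only the formats seen in this output.
    import re
    import sys

    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

    def summarize(lines):
        """Return (step name, duration in ms) pairs in the order they appear."""
        steps, pending = [], None
        for line in lines:
            m = NAME_RE.search(line)
            if m:
                # Remember the step name until its matching duration line arrives.
                pending = m.group(1).strip()
                continue
            m = DUR_RE.search(line)
            if m and pending is not None:
                steps.append((pending, float(m.group(1))))
                pending = None
        return steps

    if __name__ == "__main__":
        steps = summarize(sys.stdin)
        for name, ms in steps:
            print(f"{ms:10.3f} ms  {name}")
        print(f"{sum(ms for _, ms in steps):10.3f} ms  sum of traced steps")

Piping the startup records just above through such a sketch should, for example, list "Load super block" at 14.816 ms and "Initialize memory pools" at 8.342 ms, matching the values printed in this log; the overall "Management process finished ... duration" summary is emitted separately by finish_msg and is not simply the sum of the traced steps.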
00:24:30.024 [2024-11-17 15:01:15.551242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.551250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:30.024 [2024-11-17 15:01:15.551259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:30.024 [2024-11-17 15:01:15.551267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.551321] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:30.024 [2024-11-17 15:01:15.551346] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:30.024 [2024-11-17 15:01:15.551383] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:30.024 [2024-11-17 15:01:15.551402] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:30.024 [2024-11-17 15:01:15.551509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:30.024 [2024-11-17 15:01:15.551520] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:30.024 [2024-11-17 15:01:15.551531] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:30.024 [2024-11-17 15:01:15.551542] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:30.024 [2024-11-17 15:01:15.551552] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:30.024 [2024-11-17 15:01:15.551561] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:30.024 [2024-11-17 15:01:15.551569] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:30.024 [2024-11-17 15:01:15.551577] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:30.024 [2024-11-17 15:01:15.551585] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:30.024 [2024-11-17 15:01:15.551596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.551604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:30.024 [2024-11-17 15:01:15.551613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:24:30.024 [2024-11-17 15:01:15.551620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.551702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.024 [2024-11-17 15:01:15.551726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:30.024 [2024-11-17 15:01:15.551734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:30.024 [2024-11-17 15:01:15.551742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.024 [2024-11-17 15:01:15.551846] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:30.024 [2024-11-17 15:01:15.551859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:30.024 [2024-11-17 15:01:15.551868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.024 [2024-11-17 15:01:15.551875] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.024 [2024-11-17 15:01:15.551883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:30.024 [2024-11-17 15:01:15.551890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:30.024 [2024-11-17 15:01:15.551897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:30.024 [2024-11-17 15:01:15.551905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:30.024 [2024-11-17 15:01:15.551913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:30.024 [2024-11-17 15:01:15.551936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.024 [2024-11-17 15:01:15.551944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:30.024 [2024-11-17 15:01:15.551951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:30.024 [2024-11-17 15:01:15.551959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.024 [2024-11-17 15:01:15.551966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:30.024 [2024-11-17 15:01:15.551977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:30.024 [2024-11-17 15:01:15.551991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.024 [2024-11-17 15:01:15.551998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:30.024 [2024-11-17 15:01:15.552006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:30.024 [2024-11-17 15:01:15.552013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.024 [2024-11-17 15:01:15.552021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:30.024 [2024-11-17 15:01:15.552028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:30.025 [2024-11-17 15:01:15.552035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.025 [2024-11-17 15:01:15.552042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:30.025 [2024-11-17 15:01:15.552050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:30.025 [2024-11-17 15:01:15.552056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.025 [2024-11-17 15:01:15.552063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:30.025 [2024-11-17 15:01:15.552071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:30.025 [2024-11-17 15:01:15.552078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.025 [2024-11-17 15:01:15.552084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:30.025 [2024-11-17 15:01:15.552092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:30.025 [2024-11-17 15:01:15.552099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.025 [2024-11-17 15:01:15.552107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:30.025 [2024-11-17 15:01:15.552114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:30.025 [2024-11-17 15:01:15.552120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.025 [2024-11-17 15:01:15.552128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:30.025 [2024-11-17 
15:01:15.552134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:30.025 [2024-11-17 15:01:15.552141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.025 [2024-11-17 15:01:15.552148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:30.025 [2024-11-17 15:01:15.552155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:30.025 [2024-11-17 15:01:15.552161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.025 [2024-11-17 15:01:15.552168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:30.025 [2024-11-17 15:01:15.552175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:30.025 [2024-11-17 15:01:15.552183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.025 [2024-11-17 15:01:15.552189] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:30.025 [2024-11-17 15:01:15.552197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:30.025 [2024-11-17 15:01:15.552205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.025 [2024-11-17 15:01:15.552214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.025 [2024-11-17 15:01:15.552222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:30.025 [2024-11-17 15:01:15.552229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:30.025 [2024-11-17 15:01:15.552237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:30.025 [2024-11-17 15:01:15.552244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:30.025 [2024-11-17 15:01:15.552251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:30.025 [2024-11-17 15:01:15.552257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:30.025 [2024-11-17 15:01:15.552267] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:30.025 [2024-11-17 15:01:15.552277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.025 [2024-11-17 15:01:15.552286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:30.025 [2024-11-17 15:01:15.552293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:30.025 [2024-11-17 15:01:15.552300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:30.025 [2024-11-17 15:01:15.552307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:30.025 [2024-11-17 15:01:15.552315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:30.025 [2024-11-17 15:01:15.552323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:30.025 [2024-11-17 15:01:15.552330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
00:24:30.025 [2024-11-17 15:01:15.552337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:30.025 [2024-11-17 15:01:15.552344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:30.025 [2024-11-17 15:01:15.552351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:30.025 [2024-11-17 15:01:15.552358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:30.025 [2024-11-17 15:01:15.552366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:30.025 [2024-11-17 15:01:15.552373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:30.025 [2024-11-17 15:01:15.552381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:30.025 [2024-11-17 15:01:15.552388] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:30.025 [2024-11-17 15:01:15.552398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.025 [2024-11-17 15:01:15.552407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:30.025 [2024-11-17 15:01:15.552415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:30.025 [2024-11-17 15:01:15.552422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:30.025 [2024-11-17 15:01:15.552429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:30.025 [2024-11-17 15:01:15.552437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.025 [2024-11-17 15:01:15.552444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:30.025 [2024-11-17 15:01:15.552452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:24:30.025 [2024-11-17 15:01:15.552467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.286 [2024-11-17 15:01:15.585119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.585174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:30.287 [2024-11-17 15:01:15.585188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.605 ms 00:24:30.287 [2024-11-17 15:01:15.585197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.585295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.585303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:30.287 [2024-11-17 15:01:15.585312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:30.287 [2024-11-17 15:01:15.585320] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.631549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.631609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:30.287 [2024-11-17 15:01:15.631623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.169 ms 00:24:30.287 [2024-11-17 15:01:15.631632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.631683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.631693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:30.287 [2024-11-17 15:01:15.631703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:30.287 [2024-11-17 15:01:15.631742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.632419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.632445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:30.287 [2024-11-17 15:01:15.632457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:24:30.287 [2024-11-17 15:01:15.632465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.632622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.632633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:30.287 [2024-11-17 15:01:15.632642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:24:30.287 [2024-11-17 15:01:15.632656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.648506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.648553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:30.287 [2024-11-17 15:01:15.648568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.830 ms 00:24:30.287 [2024-11-17 15:01:15.648576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.663246] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:30.287 [2024-11-17 15:01:15.663462] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:30.287 [2024-11-17 15:01:15.663483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.663493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:30.287 [2024-11-17 15:01:15.663503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.798 ms 00:24:30.287 [2024-11-17 15:01:15.663510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.695592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.695658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:30.287 [2024-11-17 15:01:15.695673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.691 ms 00:24:30.287 [2024-11-17 15:01:15.695681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 
[2024-11-17 15:01:15.709979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.710037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:30.287 [2024-11-17 15:01:15.710049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.764 ms 00:24:30.287 [2024-11-17 15:01:15.710057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.723074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.723122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:30.287 [2024-11-17 15:01:15.723134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.865 ms 00:24:30.287 [2024-11-17 15:01:15.723142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.723856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.723886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:30.287 [2024-11-17 15:01:15.723898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:24:30.287 [2024-11-17 15:01:15.723909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.790531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.790589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:30.287 [2024-11-17 15:01:15.790613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.584 ms 00:24:30.287 [2024-11-17 15:01:15.790622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.802048] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:30.287 [2024-11-17 15:01:15.805064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.805103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:30.287 [2024-11-17 15:01:15.805115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.384 ms 00:24:30.287 [2024-11-17 15:01:15.805125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.805210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.805222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:30.287 [2024-11-17 15:01:15.805232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:30.287 [2024-11-17 15:01:15.805243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.807100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.807145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:30.287 [2024-11-17 15:01:15.807157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:24:30.287 [2024-11-17 15:01:15.807165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.807195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.807205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:30.287 [2024-11-17 
15:01:15.807214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:30.287 [2024-11-17 15:01:15.807222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.287 [2024-11-17 15:01:15.807264] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:30.287 [2024-11-17 15:01:15.807277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.287 [2024-11-17 15:01:15.807286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:30.287 [2024-11-17 15:01:15.807295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:30.287 [2024-11-17 15:01:15.807304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.548 [2024-11-17 15:01:15.833765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.548 [2024-11-17 15:01:15.833817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:30.548 [2024-11-17 15:01:15.833831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.441 ms 00:24:30.548 [2024-11-17 15:01:15.833846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.548 [2024-11-17 15:01:15.833954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.548 [2024-11-17 15:01:15.833966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:30.548 [2024-11-17 15:01:15.833976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:30.548 [2024-11-17 15:01:15.833985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.549 [2024-11-17 15:01:15.835401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.248 ms, result 0 00:24:31.492  [2024-11-17T15:01:18.421Z] Copying: 1008/1048576 [kB] (1008 kBps) [2024-11-17T15:01:19.364Z] Copying: 4448/1048576 [kB] (3440 kBps) [2024-11-17T15:01:20.306Z] Copying: 22/1024 [MB] (18 MBps) [2024-11-17T15:01:21.250Z] Copying: 52/1024 [MB] (30 MBps) [2024-11-17T15:01:22.195Z] Copying: 75/1024 [MB] (22 MBps) [2024-11-17T15:01:23.136Z] Copying: 102/1024 [MB] (26 MBps) [2024-11-17T15:01:24.078Z] Copying: 142/1024 [MB] (40 MBps) [2024-11-17T15:01:25.023Z] Copying: 170/1024 [MB] (27 MBps) [2024-11-17T15:01:26.413Z] Copying: 200/1024 [MB] (30 MBps) [2024-11-17T15:01:27.357Z] Copying: 232/1024 [MB] (31 MBps) [2024-11-17T15:01:28.305Z] Copying: 256/1024 [MB] (23 MBps) [2024-11-17T15:01:29.308Z] Copying: 272/1024 [MB] (16 MBps) [2024-11-17T15:01:30.356Z] Copying: 289/1024 [MB] (17 MBps) [2024-11-17T15:01:31.300Z] Copying: 306/1024 [MB] (16 MBps) [2024-11-17T15:01:32.242Z] Copying: 323/1024 [MB] (17 MBps) [2024-11-17T15:01:33.185Z] Copying: 345/1024 [MB] (21 MBps) [2024-11-17T15:01:34.129Z] Copying: 361/1024 [MB] (16 MBps) [2024-11-17T15:01:35.073Z] Copying: 382/1024 [MB] (20 MBps) [2024-11-17T15:01:36.460Z] Copying: 412/1024 [MB] (30 MBps) [2024-11-17T15:01:37.032Z] Copying: 437/1024 [MB] (25 MBps) [2024-11-17T15:01:38.419Z] Copying: 467/1024 [MB] (29 MBps) [2024-11-17T15:01:39.368Z] Copying: 497/1024 [MB] (29 MBps) [2024-11-17T15:01:40.312Z] Copying: 525/1024 [MB] (28 MBps) [2024-11-17T15:01:41.257Z] Copying: 552/1024 [MB] (27 MBps) [2024-11-17T15:01:42.200Z] Copying: 580/1024 [MB] (27 MBps) [2024-11-17T15:01:43.145Z] Copying: 607/1024 [MB] (27 MBps) [2024-11-17T15:01:44.091Z] Copying: 635/1024 [MB] (27 MBps) [2024-11-17T15:01:45.035Z] Copying: 650/1024 
[MB] (15 MBps) [2024-11-17T15:01:46.425Z] Copying: 678/1024 [MB] (27 MBps) [2024-11-17T15:01:47.370Z] Copying: 693/1024 [MB] (15 MBps) [2024-11-17T15:01:48.315Z] Copying: 709/1024 [MB] (15 MBps) [2024-11-17T15:01:49.259Z] Copying: 725/1024 [MB] (16 MBps) [2024-11-17T15:01:50.203Z] Copying: 741/1024 [MB] (15 MBps) [2024-11-17T15:01:51.148Z] Copying: 759/1024 [MB] (17 MBps) [2024-11-17T15:01:52.093Z] Copying: 782/1024 [MB] (22 MBps) [2024-11-17T15:01:53.038Z] Copying: 807/1024 [MB] (25 MBps) [2024-11-17T15:01:54.433Z] Copying: 828/1024 [MB] (21 MBps) [2024-11-17T15:01:55.377Z] Copying: 859/1024 [MB] (31 MBps) [2024-11-17T15:01:56.319Z] Copying: 911/1024 [MB] (51 MBps) [2024-11-17T15:01:57.328Z] Copying: 935/1024 [MB] (24 MBps) [2024-11-17T15:01:58.272Z] Copying: 963/1024 [MB] (27 MBps) [2024-11-17T15:01:59.215Z] Copying: 993/1024 [MB] (30 MBps) [2024-11-17T15:01:59.215Z] Copying: 1023/1024 [MB] (29 MBps) [2024-11-17T15:02:01.130Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-17 15:02:00.625816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.587 [2024-11-17 15:02:00.625911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:15.587 [2024-11-17 15:02:00.625950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:15.587 [2024-11-17 15:02:00.625960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.587 [2024-11-17 15:02:00.625989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:15.587 [2024-11-17 15:02:00.629311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.587 [2024-11-17 15:02:00.629362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:15.587 [2024-11-17 15:02:00.629376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 00:25:15.587 [2024-11-17 15:02:00.629385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.587 [2024-11-17 15:02:00.629658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.587 [2024-11-17 15:02:00.629674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:15.587 [2024-11-17 15:02:00.629692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:25:15.587 [2024-11-17 15:02:00.629703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.587 [2024-11-17 15:02:00.643398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.587 [2024-11-17 15:02:00.643460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:15.587 [2024-11-17 15:02:00.643473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.671 ms 00:25:15.587 [2024-11-17 15:02:00.643482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.587 [2024-11-17 15:02:00.650337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.587 [2024-11-17 15:02:00.650412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:15.587 [2024-11-17 15:02:00.650424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.816 ms 00:25:15.587 [2024-11-17 15:02:00.650442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.587 [2024-11-17 15:02:00.679591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.587 [2024-11-17 15:02:00.679650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist NV cache metadata 00:25:15.587 [2024-11-17 15:02:00.679666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.002 ms 00:25:15.587 [2024-11-17 15:02:00.679675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.588 [2024-11-17 15:02:00.698284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.588 [2024-11-17 15:02:00.698337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:15.588 [2024-11-17 15:02:00.698351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.552 ms 00:25:15.588 [2024-11-17 15:02:00.698362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.588 [2024-11-17 15:02:00.702156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.588 [2024-11-17 15:02:00.702212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:15.588 [2024-11-17 15:02:00.702225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.734 ms 00:25:15.588 [2024-11-17 15:02:00.702233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.588 [2024-11-17 15:02:00.728750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.588 [2024-11-17 15:02:00.728798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:15.588 [2024-11-17 15:02:00.728810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.491 ms 00:25:15.588 [2024-11-17 15:02:00.728818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.588 [2024-11-17 15:02:00.754845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.588 [2024-11-17 15:02:00.754895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:15.588 [2024-11-17 15:02:00.754936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.975 ms 00:25:15.588 [2024-11-17 15:02:00.754945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.588 [2024-11-17 15:02:00.780394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.588 [2024-11-17 15:02:00.780444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:15.588 [2024-11-17 15:02:00.780457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.398 ms 00:25:15.588 [2024-11-17 15:02:00.780464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.588 [2024-11-17 15:02:00.806111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.588 [2024-11-17 15:02:00.806158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:15.588 [2024-11-17 15:02:00.806171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.568 ms 00:25:15.588 [2024-11-17 15:02:00.806179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.588 [2024-11-17 15:02:00.806227] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:15.588 [2024-11-17 15:02:00.806243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:15.588 [2024-11-17 15:02:00.806254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:15.588 [2024-11-17 15:02:00.806263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 
[2024-11-17 15:02:00.806271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:25:15.588 [2024-11-17 15:02:00.806465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:15.588 [2024-11-17 15:02:00.806712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.806999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.807007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.807015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.807023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.807031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:15.589 [2024-11-17 15:02:00.807048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:15.589 [2024-11-17 15:02:00.807057] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af70b449-0b17-4308-b150-b4a01c1c96c0 00:25:15.589 [2024-11-17 15:02:00.807066] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 262656 00:25:15.589 [2024-11-17 15:02:00.807074] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157376 00:25:15.589 [2024-11-17 15:02:00.807081] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155392 00:25:15.589 [2024-11-17 15:02:00.807098] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128 00:25:15.589 [2024-11-17 15:02:00.807106] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:15.589 [2024-11-17 15:02:00.807115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:15.589 [2024-11-17 15:02:00.807123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:15.589 [2024-11-17 15:02:00.807138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:15.589 [2024-11-17 15:02:00.807144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:15.589 [2024-11-17 15:02:00.807152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.589 [2024-11-17 15:02:00.807160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:15.589 [2024-11-17 15:02:00.807168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:25:15.589 [2024-11-17 15:02:00.807176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:00.821156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.589 [2024-11-17 15:02:00.821211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:15.589 [2024-11-17 15:02:00.821222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.942 ms 00:25:15.589 [2024-11-17 15:02:00.821230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:00.821621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.589 [2024-11-17 15:02:00.821637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:15.589 [2024-11-17 15:02:00.821646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:25:15.589 [2024-11-17 15:02:00.821655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:00.858755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:00.858810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:15.589 [2024-11-17 15:02:00.858822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.589 [2024-11-17 15:02:00.858830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:00.858894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:00.858904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:15.589 [2024-11-17 15:02:00.858912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.589 [2024-11-17 15:02:00.858935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:00.859021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:00.859039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:15.589 [2024-11-17 15:02:00.859047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.589 
[2024-11-17 15:02:00.859055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:00.859072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:00.859080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:15.589 [2024-11-17 15:02:00.859088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.589 [2024-11-17 15:02:00.859096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:00.945271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:00.945326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:15.589 [2024-11-17 15:02:00.945339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.589 [2024-11-17 15:02:00.945348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:01.016149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:01.016207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:15.589 [2024-11-17 15:02:01.016220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.589 [2024-11-17 15:02:01.016228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:01.016285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:01.016296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:15.589 [2024-11-17 15:02:01.016311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.589 [2024-11-17 15:02:01.016319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:01.016378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:01.016389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:15.589 [2024-11-17 15:02:01.016398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.589 [2024-11-17 15:02:01.016406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:01.016509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:01.016520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:15.589 [2024-11-17 15:02:01.016529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.589 [2024-11-17 15:02:01.016540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.589 [2024-11-17 15:02:01.016574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.589 [2024-11-17 15:02:01.016584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:15.590 [2024-11-17 15:02:01.016592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.590 [2024-11-17 15:02:01.016600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.590 [2024-11-17 15:02:01.016641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.590 [2024-11-17 15:02:01.016652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:15.590 [2024-11-17 15:02:01.016661] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.590 [2024-11-17 15:02:01.016672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.590 [2024-11-17 15:02:01.016721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.590 [2024-11-17 15:02:01.016733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:15.590 [2024-11-17 15:02:01.016742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.590 [2024-11-17 15:02:01.016750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.590 [2024-11-17 15:02:01.016888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.032 ms, result 0 00:25:16.530 00:25:16.530 00:25:16.530 15:02:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:19.078 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:19.079 15:02:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:19.079 [2024-11-17 15:02:04.162764] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:19.079 [2024-11-17 15:02:04.162937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79042 ] 00:25:19.079 [2024-11-17 15:02:04.330399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.079 [2024-11-17 15:02:04.436604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.341 [2024-11-17 15:02:04.729209] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:19.341 [2024-11-17 15:02:04.729288] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:19.605 [2024-11-17 15:02:04.890409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.890477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:19.605 [2024-11-17 15:02:04.890498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:19.605 [2024-11-17 15:02:04.890507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.890564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.890575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:19.605 [2024-11-17 15:02:04.890587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:19.605 [2024-11-17 15:02:04.890596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.890617] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:19.605 [2024-11-17 15:02:04.891865] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:19.605 [2024-11-17 15:02:04.891961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.891973] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:19.605 [2024-11-17 15:02:04.891983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:25:19.605 [2024-11-17 15:02:04.891991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.893741] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:19.605 [2024-11-17 15:02:04.908325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.908388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:19.605 [2024-11-17 15:02:04.908402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.586 ms 00:25:19.605 [2024-11-17 15:02:04.908410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.908502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.908512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:19.605 [2024-11-17 15:02:04.908522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:19.605 [2024-11-17 15:02:04.908529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.916979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.917022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:19.605 [2024-11-17 15:02:04.917032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.365 ms 00:25:19.605 [2024-11-17 15:02:04.917040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.917131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.917141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:19.605 [2024-11-17 15:02:04.917149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:19.605 [2024-11-17 15:02:04.917158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.917205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.917215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:19.605 [2024-11-17 15:02:04.917225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:19.605 [2024-11-17 15:02:04.917233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.917257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:19.605 [2024-11-17 15:02:04.921266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.921308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:19.605 [2024-11-17 15:02:04.921319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.015 ms 00:25:19.605 [2024-11-17 15:02:04.921330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.921368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.921377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:19.605 [2024-11-17 15:02:04.921385] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:19.605 [2024-11-17 15:02:04.921393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.921449] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:19.605 [2024-11-17 15:02:04.921474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:19.605 [2024-11-17 15:02:04.921510] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:19.605 [2024-11-17 15:02:04.921530] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:19.605 [2024-11-17 15:02:04.921637] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:19.605 [2024-11-17 15:02:04.921648] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:19.605 [2024-11-17 15:02:04.921660] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:19.605 [2024-11-17 15:02:04.921671] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:19.605 [2024-11-17 15:02:04.921680] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:19.605 [2024-11-17 15:02:04.921688] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:19.605 [2024-11-17 15:02:04.921696] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:19.605 [2024-11-17 15:02:04.921704] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:19.605 [2024-11-17 15:02:04.921712] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:19.605 [2024-11-17 15:02:04.921724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.921732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:19.605 [2024-11-17 15:02:04.921741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:19.605 [2024-11-17 15:02:04.921750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.921833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.605 [2024-11-17 15:02:04.921841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:19.605 [2024-11-17 15:02:04.921849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:19.605 [2024-11-17 15:02:04.921856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.605 [2024-11-17 15:02:04.921979] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:19.605 [2024-11-17 15:02:04.921994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:19.605 [2024-11-17 15:02:04.922003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:19.605 [2024-11-17 15:02:04.922012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.605 [2024-11-17 15:02:04.922022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:19.605 [2024-11-17 15:02:04.922029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:19.605 
[2024-11-17 15:02:04.922036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:19.605 [2024-11-17 15:02:04.922043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:19.605 [2024-11-17 15:02:04.922051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:19.606 [2024-11-17 15:02:04.922066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:19.606 [2024-11-17 15:02:04.922073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:19.606 [2024-11-17 15:02:04.922079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:19.606 [2024-11-17 15:02:04.922086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:19.606 [2024-11-17 15:02:04.922093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:19.606 [2024-11-17 15:02:04.922108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:19.606 [2024-11-17 15:02:04.922125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:19.606 [2024-11-17 15:02:04.922131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:19.606 [2024-11-17 15:02:04.922146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:19.606 [2024-11-17 15:02:04.922160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:19.606 [2024-11-17 15:02:04.922167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:19.606 [2024-11-17 15:02:04.922179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:19.606 [2024-11-17 15:02:04.922186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:19.606 [2024-11-17 15:02:04.922200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:19.606 [2024-11-17 15:02:04.922206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:19.606 [2024-11-17 15:02:04.922219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:19.606 [2024-11-17 15:02:04.922225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:19.606 [2024-11-17 15:02:04.922238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:19.606 [2024-11-17 15:02:04.922245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:19.606 [2024-11-17 15:02:04.922252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:19.606 [2024-11-17 15:02:04.922259] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_log 00:25:19.606 [2024-11-17 15:02:04.922265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:19.606 [2024-11-17 15:02:04.922272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:19.606 [2024-11-17 15:02:04.922284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:19.606 [2024-11-17 15:02:04.922291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922298] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:19.606 [2024-11-17 15:02:04.922306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:19.606 [2024-11-17 15:02:04.922313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:19.606 [2024-11-17 15:02:04.922321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.606 [2024-11-17 15:02:04.922329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:19.606 [2024-11-17 15:02:04.922338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:19.606 [2024-11-17 15:02:04.922345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:19.606 [2024-11-17 15:02:04.922352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:19.606 [2024-11-17 15:02:04.922358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:19.606 [2024-11-17 15:02:04.922366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:19.606 [2024-11-17 15:02:04.922375] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:19.606 [2024-11-17 15:02:04.922384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:19.606 [2024-11-17 15:02:04.922393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:19.606 [2024-11-17 15:02:04.922400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:19.606 [2024-11-17 15:02:04.922407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:19.606 [2024-11-17 15:02:04.922414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:19.606 [2024-11-17 15:02:04.922421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:19.606 [2024-11-17 15:02:04.922428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:19.606 [2024-11-17 15:02:04.922435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:19.606 [2024-11-17 15:02:04.922442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:19.606 [2024-11-17 15:02:04.922449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:19.606 [2024-11-17 15:02:04.922457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:19.606 [2024-11-17 15:02:04.922464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:19.606 [2024-11-17 15:02:04.922471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:19.606 [2024-11-17 15:02:04.922478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:19.606 [2024-11-17 15:02:04.922486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:19.606 [2024-11-17 15:02:04.922493] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:19.606 [2024-11-17 15:02:04.922504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:19.606 [2024-11-17 15:02:04.922512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:19.606 [2024-11-17 15:02:04.922520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:19.606 [2024-11-17 15:02:04.922528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:19.606 [2024-11-17 15:02:04.922535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:19.606 [2024-11-17 15:02:04.922543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.606 [2024-11-17 15:02:04.922550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:19.606 [2024-11-17 15:02:04.922559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:25:19.606 [2024-11-17 15:02:04.922566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.606 [2024-11-17 15:02:04.954994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.606 [2024-11-17 15:02:04.955047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:19.606 [2024-11-17 15:02:04.955059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.384 ms 00:25:19.606 [2024-11-17 15:02:04.955067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.606 [2024-11-17 15:02:04.955164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.606 [2024-11-17 15:02:04.955173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:19.606 [2024-11-17 15:02:04.955182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:19.606 [2024-11-17 15:02:04.955190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.606 [2024-11-17 15:02:05.000817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.606 [2024-11-17 15:02:05.000877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:25:19.606 [2024-11-17 15:02:05.000890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.567 ms 00:25:19.606 [2024-11-17 15:02:05.000898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.606 [2024-11-17 15:02:05.000960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.606 [2024-11-17 15:02:05.000971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:19.606 [2024-11-17 15:02:05.000981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:19.606 [2024-11-17 15:02:05.000993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.606 [2024-11-17 15:02:05.001605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.606 [2024-11-17 15:02:05.001655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:19.606 [2024-11-17 15:02:05.001666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:25:19.606 [2024-11-17 15:02:05.001674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.606 [2024-11-17 15:02:05.001829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.606 [2024-11-17 15:02:05.001854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:19.606 [2024-11-17 15:02:05.001864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:25:19.606 [2024-11-17 15:02:05.001879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.606 [2024-11-17 15:02:05.017824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.606 [2024-11-17 15:02:05.017872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:19.607 [2024-11-17 15:02:05.017886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.923 ms 00:25:19.607 [2024-11-17 15:02:05.017894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.607 [2024-11-17 15:02:05.032590] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:19.607 [2024-11-17 15:02:05.032646] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:19.607 [2024-11-17 15:02:05.032660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.607 [2024-11-17 15:02:05.032668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:19.607 [2024-11-17 15:02:05.032679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.643 ms 00:25:19.607 [2024-11-17 15:02:05.032686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.607 [2024-11-17 15:02:05.058603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.607 [2024-11-17 15:02:05.058664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:19.607 [2024-11-17 15:02:05.058677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.859 ms 00:25:19.607 [2024-11-17 15:02:05.058685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.607 [2024-11-17 15:02:05.071868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.607 [2024-11-17 15:02:05.071916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:19.607 [2024-11-17 15:02:05.071940] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.122 ms 00:25:19.607 [2024-11-17 15:02:05.071947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.607 [2024-11-17 15:02:05.084641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.607 [2024-11-17 15:02:05.084693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:19.607 [2024-11-17 15:02:05.084706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.644 ms 00:25:19.607 [2024-11-17 15:02:05.084714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.607 [2024-11-17 15:02:05.085382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.607 [2024-11-17 15:02:05.085415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:19.607 [2024-11-17 15:02:05.085425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:25:19.607 [2024-11-17 15:02:05.085436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.870 [2024-11-17 15:02:05.152612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.870 [2024-11-17 15:02:05.152680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:19.870 [2024-11-17 15:02:05.152704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.156 ms 00:25:19.870 [2024-11-17 15:02:05.152713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.870 [2024-11-17 15:02:05.164083] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:19.870 [2024-11-17 15:02:05.167233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.870 [2024-11-17 15:02:05.167280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:19.870 [2024-11-17 15:02:05.167291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.458 ms 00:25:19.870 [2024-11-17 15:02:05.167299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.870 [2024-11-17 15:02:05.167388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.870 [2024-11-17 15:02:05.167399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:19.870 [2024-11-17 15:02:05.167408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:19.870 [2024-11-17 15:02:05.167420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.870 [2024-11-17 15:02:05.168301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.870 [2024-11-17 15:02:05.168355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:19.870 [2024-11-17 15:02:05.168368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:25:19.870 [2024-11-17 15:02:05.168378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.870 [2024-11-17 15:02:05.168409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.870 [2024-11-17 15:02:05.168420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:19.870 [2024-11-17 15:02:05.168430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:19.870 [2024-11-17 15:02:05.168439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.870 [2024-11-17 15:02:05.168484] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:19.870 [2024-11-17 15:02:05.168503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.870 [2024-11-17 15:02:05.168513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:19.870 [2024-11-17 15:02:05.168523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:19.870 [2024-11-17 15:02:05.168532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.870 [2024-11-17 15:02:05.194873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.870 [2024-11-17 15:02:05.194939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:19.870 [2024-11-17 15:02:05.194953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.321 ms 00:25:19.870 [2024-11-17 15:02:05.194967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.870 [2024-11-17 15:02:05.195062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.870 [2024-11-17 15:02:05.195073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:19.870 [2024-11-17 15:02:05.195084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:19.870 [2024-11-17 15:02:05.195093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.870 [2024-11-17 15:02:05.196375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.448 ms, result 0 00:25:21.278  [2024-11-17T15:02:07.395Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-17T15:02:08.784Z] Copying: 25/1024 [MB] (14 MBps) [2024-11-17T15:02:09.725Z] Copying: 39/1024 [MB] (14 MBps) [2024-11-17T15:02:10.670Z] Copying: 50/1024 [MB] (10 MBps) [2024-11-17T15:02:11.616Z] Copying: 60/1024 [MB] (10 MBps) [2024-11-17T15:02:12.559Z] Copying: 71/1024 [MB] (10 MBps) [2024-11-17T15:02:13.500Z] Copying: 81/1024 [MB] (10 MBps) [2024-11-17T15:02:14.444Z] Copying: 111/1024 [MB] (29 MBps) [2024-11-17T15:02:15.387Z] Copying: 125/1024 [MB] (13 MBps) [2024-11-17T15:02:16.776Z] Copying: 138/1024 [MB] (12 MBps) [2024-11-17T15:02:17.718Z] Copying: 148/1024 [MB] (10 MBps) [2024-11-17T15:02:18.663Z] Copying: 159/1024 [MB] (10 MBps) [2024-11-17T15:02:19.605Z] Copying: 170/1024 [MB] (11 MBps) [2024-11-17T15:02:20.549Z] Copying: 190/1024 [MB] (19 MBps) [2024-11-17T15:02:21.492Z] Copying: 210/1024 [MB] (20 MBps) [2024-11-17T15:02:22.435Z] Copying: 224/1024 [MB] (13 MBps) [2024-11-17T15:02:23.379Z] Copying: 238/1024 [MB] (14 MBps) [2024-11-17T15:02:24.768Z] Copying: 254/1024 [MB] (15 MBps) [2024-11-17T15:02:25.712Z] Copying: 268/1024 [MB] (13 MBps) [2024-11-17T15:02:26.657Z] Copying: 293/1024 [MB] (25 MBps) [2024-11-17T15:02:27.603Z] Copying: 305/1024 [MB] (11 MBps) [2024-11-17T15:02:28.547Z] Copying: 321/1024 [MB] (15 MBps) [2024-11-17T15:02:29.491Z] Copying: 334/1024 [MB] (12 MBps) [2024-11-17T15:02:30.484Z] Copying: 351/1024 [MB] (17 MBps) [2024-11-17T15:02:31.450Z] Copying: 364/1024 [MB] (12 MBps) [2024-11-17T15:02:32.395Z] Copying: 383/1024 [MB] (18 MBps) [2024-11-17T15:02:33.782Z] Copying: 403/1024 [MB] (20 MBps) [2024-11-17T15:02:34.724Z] Copying: 422/1024 [MB] (19 MBps) [2024-11-17T15:02:35.667Z] Copying: 438/1024 [MB] (16 MBps) [2024-11-17T15:02:36.611Z] Copying: 456/1024 [MB] (17 MBps) [2024-11-17T15:02:37.552Z] Copying: 478/1024 [MB] (22 MBps) [2024-11-17T15:02:38.497Z] Copying: 498/1024 [MB] (20 MBps) [2024-11-17T15:02:39.438Z] 
Copying: 510/1024 [MB] (11 MBps) [2024-11-17T15:02:40.823Z] Copying: 530/1024 [MB] (20 MBps) [2024-11-17T15:02:41.395Z] Copying: 544/1024 [MB] (13 MBps) [2024-11-17T15:02:42.782Z] Copying: 555/1024 [MB] (10 MBps) [2024-11-17T15:02:43.727Z] Copying: 565/1024 [MB] (10 MBps) [2024-11-17T15:02:44.670Z] Copying: 576/1024 [MB] (10 MBps) [2024-11-17T15:02:45.613Z] Copying: 595/1024 [MB] (19 MBps) [2024-11-17T15:02:46.557Z] Copying: 611/1024 [MB] (15 MBps) [2024-11-17T15:02:47.501Z] Copying: 626/1024 [MB] (15 MBps) [2024-11-17T15:02:48.444Z] Copying: 641/1024 [MB] (14 MBps) [2024-11-17T15:02:49.386Z] Copying: 661/1024 [MB] (19 MBps) [2024-11-17T15:02:50.774Z] Copying: 674/1024 [MB] (13 MBps) [2024-11-17T15:02:51.719Z] Copying: 697/1024 [MB] (23 MBps) [2024-11-17T15:02:52.663Z] Copying: 717/1024 [MB] (20 MBps) [2024-11-17T15:02:53.608Z] Copying: 731/1024 [MB] (13 MBps) [2024-11-17T15:02:54.550Z] Copying: 748/1024 [MB] (17 MBps) [2024-11-17T15:02:55.494Z] Copying: 763/1024 [MB] (14 MBps) [2024-11-17T15:02:56.437Z] Copying: 783/1024 [MB] (20 MBps) [2024-11-17T15:02:57.380Z] Copying: 802/1024 [MB] (18 MBps) [2024-11-17T15:02:58.767Z] Copying: 815/1024 [MB] (12 MBps) [2024-11-17T15:02:59.712Z] Copying: 837/1024 [MB] (22 MBps) [2024-11-17T15:03:00.657Z] Copying: 850/1024 [MB] (12 MBps) [2024-11-17T15:03:01.647Z] Copying: 861/1024 [MB] (10 MBps) [2024-11-17T15:03:02.623Z] Copying: 873/1024 [MB] (11 MBps) [2024-11-17T15:03:03.567Z] Copying: 890/1024 [MB] (16 MBps) [2024-11-17T15:03:04.511Z] Copying: 911/1024 [MB] (21 MBps) [2024-11-17T15:03:05.458Z] Copying: 933/1024 [MB] (21 MBps) [2024-11-17T15:03:06.403Z] Copying: 945/1024 [MB] (12 MBps) [2024-11-17T15:03:07.791Z] Copying: 956/1024 [MB] (10 MBps) [2024-11-17T15:03:08.736Z] Copying: 968/1024 [MB] (11 MBps) [2024-11-17T15:03:09.680Z] Copying: 985/1024 [MB] (17 MBps) [2024-11-17T15:03:10.625Z] Copying: 1005/1024 [MB] (19 MBps) [2024-11-17T15:03:10.887Z] Copying: 1016/1024 [MB] (11 MBps) [2024-11-17T15:03:10.887Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 15:03:10.832566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.344 [2024-11-17 15:03:10.832645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:25.345 [2024-11-17 15:03:10.832663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:25.345 [2024-11-17 15:03:10.832674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.345 [2024-11-17 15:03:10.832701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:25.345 [2024-11-17 15:03:10.836384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.345 [2024-11-17 15:03:10.836437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:25.345 [2024-11-17 15:03:10.836461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.664 ms 00:26:25.345 [2024-11-17 15:03:10.836473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.345 [2024-11-17 15:03:10.838590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.345 [2024-11-17 15:03:10.838637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:25.345 [2024-11-17 15:03:10.838652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:26:25.345 [2024-11-17 15:03:10.838665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.345 [2024-11-17 15:03:10.843945] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.345 [2024-11-17 15:03:10.843990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:25.345 [2024-11-17 15:03:10.844004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.260 ms 00:26:25.345 [2024-11-17 15:03:10.844015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.345 [2024-11-17 15:03:10.850809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.345 [2024-11-17 15:03:10.850845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:25.345 [2024-11-17 15:03:10.850855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.766 ms 00:26:25.345 [2024-11-17 15:03:10.850862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.345 [2024-11-17 15:03:10.871182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.345 [2024-11-17 15:03:10.871221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:25.345 [2024-11-17 15:03:10.871231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.267 ms 00:26:25.345 [2024-11-17 15:03:10.871238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.345 [2024-11-17 15:03:10.883278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.345 [2024-11-17 15:03:10.883316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:25.345 [2024-11-17 15:03:10.883327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.005 ms 00:26:25.345 [2024-11-17 15:03:10.883333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.345 [2024-11-17 15:03:10.885368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.345 [2024-11-17 15:03:10.885408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:25.345 [2024-11-17 15:03:10.885416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:26:25.345 [2024-11-17 15:03:10.885422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.607 [2024-11-17 15:03:10.903731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.607 [2024-11-17 15:03:10.903764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:25.607 [2024-11-17 15:03:10.903773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.297 ms 00:26:25.607 [2024-11-17 15:03:10.903787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.607 [2024-11-17 15:03:10.921396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.607 [2024-11-17 15:03:10.921433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:25.608 [2024-11-17 15:03:10.921441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.580 ms 00:26:25.608 [2024-11-17 15:03:10.921447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.608 [2024-11-17 15:03:10.938499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.608 [2024-11-17 15:03:10.938526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:25.608 [2024-11-17 15:03:10.938534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.025 ms 00:26:25.608 [2024-11-17 15:03:10.938539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:25.608 [2024-11-17 15:03:10.955397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.608 [2024-11-17 15:03:10.955425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:25.608 [2024-11-17 15:03:10.955433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.815 ms 00:26:25.608 [2024-11-17 15:03:10.955438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.608 [2024-11-17 15:03:10.955463] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:25.608 [2024-11-17 15:03:10.955474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:25.608 [2024-11-17 15:03:10.955486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:25.608 [2024-11-17 15:03:10.955492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955736] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:25.608 [2024-11-17 15:03:10.955872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.955877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.955883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.955889] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.955894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.955900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.955906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.955912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.955917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 
15:03:10.956788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:25.609 [2024-11-17 15:03:10.956823] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:25.609 [2024-11-17 15:03:10.956833] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af70b449-0b17-4308-b150-b4a01c1c96c0 00:26:25.609 [2024-11-17 15:03:10.956839] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:25.609 [2024-11-17 15:03:10.956845] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:25.609 [2024-11-17 15:03:10.956851] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:25.609 [2024-11-17 15:03:10.956857] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:25.609 [2024-11-17 15:03:10.956862] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:25.609 [2024-11-17 15:03:10.956868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:25.609 [2024-11-17 15:03:10.956879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:25.609 [2024-11-17 15:03:10.956883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:25.609 [2024-11-17 15:03:10.956888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:25.609 [2024-11-17 15:03:10.956895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.609 [2024-11-17 15:03:10.956902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:25.609 [2024-11-17 15:03:10.956908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:26:25.609 [2024-11-17 15:03:10.956914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:10.966408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.609 [2024-11-17 15:03:10.966432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:25.609 [2024-11-17 15:03:10.966440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.456 ms 00:26:25.609 [2024-11-17 15:03:10.966446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:10.966707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.609 [2024-11-17 15:03:10.966713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:25.609 [2024-11-17 15:03:10.966723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:26:25.609 [2024-11-17 15:03:10.966729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:10.992376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.609 [2024-11-17 15:03:10.992403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.609 
[2024-11-17 15:03:10.992410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.609 [2024-11-17 15:03:10.992416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:10.992455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.609 [2024-11-17 15:03:10.992461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.609 [2024-11-17 15:03:10.992470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.609 [2024-11-17 15:03:10.992475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:10.992527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.609 [2024-11-17 15:03:10.992535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.609 [2024-11-17 15:03:10.992541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.609 [2024-11-17 15:03:10.992546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:10.992557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.609 [2024-11-17 15:03:10.992563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:25.609 [2024-11-17 15:03:10.992568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.609 [2024-11-17 15:03:10.992576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:11.053038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.609 [2024-11-17 15:03:11.053078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.609 [2024-11-17 15:03:11.053086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.609 [2024-11-17 15:03:11.053092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:11.102014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.609 [2024-11-17 15:03:11.102148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.609 [2024-11-17 15:03:11.102160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.609 [2024-11-17 15:03:11.102171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:11.102210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.609 [2024-11-17 15:03:11.102217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.609 [2024-11-17 15:03:11.102224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.609 [2024-11-17 15:03:11.102230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:11.102270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.609 [2024-11-17 15:03:11.102277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.609 [2024-11-17 15:03:11.102283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.609 [2024-11-17 15:03:11.102289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.609 [2024-11-17 15:03:11.102363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.609 [2024-11-17 15:03:11.102371] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.609 [2024-11-17 15:03:11.102377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.610 [2024-11-17 15:03:11.102383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.610 [2024-11-17 15:03:11.102405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.610 [2024-11-17 15:03:11.102412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:25.610 [2024-11-17 15:03:11.102418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.610 [2024-11-17 15:03:11.102423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.610 [2024-11-17 15:03:11.102453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.610 [2024-11-17 15:03:11.102460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.610 [2024-11-17 15:03:11.102466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.610 [2024-11-17 15:03:11.102471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.610 [2024-11-17 15:03:11.102503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.610 [2024-11-17 15:03:11.102510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.610 [2024-11-17 15:03:11.102516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.610 [2024-11-17 15:03:11.102522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.610 [2024-11-17 15:03:11.102611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.036 ms, result 0 00:26:26.182 00:26:26.182 00:26:26.182 15:03:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:28.730 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:28.730 15:03:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:28.730 15:03:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:28.730 15:03:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:28.730 15:03:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:28.730 15:03:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:28.731 15:03:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:28.731 15:03:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:28.731 15:03:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77283 00:26:28.731 15:03:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77283 ']' 00:26:28.731 Process with pid 77283 is not found 00:26:28.731 15:03:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 77283 00:26:28.731 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77283) - No such process 00:26:28.731 15:03:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 77283 is not found' 00:26:28.731 15:03:14 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:28.992 Remove shared memory files 00:26:28.992 15:03:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:28.992 15:03:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:28.992 15:03:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:28.992 15:03:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:28.992 15:03:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:28.992 15:03:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:28.992 15:03:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:28.992 ************************************ 00:26:28.992 END TEST ftl_dirty_shutdown 00:26:28.992 ************************************ 00:26:28.992 00:26:28.992 real 3m55.104s 00:26:28.992 user 4m15.282s 00:26:28.992 sys 0m25.702s 00:26:28.992 15:03:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.992 15:03:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:28.992 15:03:14 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:28.992 15:03:14 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:28.992 15:03:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.992 15:03:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:28.992 ************************************ 00:26:28.993 START TEST ftl_upgrade_shutdown 00:26:28.993 ************************************ 00:26:28.993 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:29.255 * Looking for test storage... 
00:26:29.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.255 --rc genhtml_branch_coverage=1 00:26:29.255 --rc genhtml_function_coverage=1 00:26:29.255 --rc genhtml_legend=1 00:26:29.255 --rc geninfo_all_blocks=1 00:26:29.255 --rc geninfo_unexecuted_blocks=1 00:26:29.255 00:26:29.255 ' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.255 --rc genhtml_branch_coverage=1 00:26:29.255 --rc genhtml_function_coverage=1 00:26:29.255 --rc genhtml_legend=1 00:26:29.255 --rc geninfo_all_blocks=1 00:26:29.255 --rc geninfo_unexecuted_blocks=1 00:26:29.255 00:26:29.255 ' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.255 --rc genhtml_branch_coverage=1 00:26:29.255 --rc genhtml_function_coverage=1 00:26:29.255 --rc genhtml_legend=1 00:26:29.255 --rc geninfo_all_blocks=1 00:26:29.255 --rc geninfo_unexecuted_blocks=1 00:26:29.255 00:26:29.255 ' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:29.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.255 --rc genhtml_branch_coverage=1 00:26:29.255 --rc genhtml_function_coverage=1 00:26:29.255 --rc genhtml_legend=1 00:26:29.255 --rc geninfo_all_blocks=1 00:26:29.255 --rc geninfo_unexecuted_blocks=1 00:26:29.255 00:26:29.255 ' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:29.255 15:03:14 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79835 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79835 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79835 ']' 00:26:29.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.255 15:03:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:29.255 [2024-11-17 15:03:14.774259] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:29.255 [2024-11-17 15:03:14.774533] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79835 ] 00:26:29.517 [2024-11-17 15:03:14.936240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.517 [2024-11-17 15:03:15.049181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:30.461 15:03:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:30.722 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:30.722 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:30.722 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:30.722 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:26:30.722 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:30.722 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:30.722 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:26:30.722 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:30.722 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:30.722 { 00:26:30.722 "name": "basen1", 00:26:30.722 "aliases": [ 00:26:30.722 "646b3273-f6c4-4d16-ba47-9d7689c6306b" 00:26:30.722 ], 00:26:30.722 "product_name": "NVMe disk", 00:26:30.722 "block_size": 4096, 00:26:30.722 "num_blocks": 1310720, 00:26:30.722 "uuid": "646b3273-f6c4-4d16-ba47-9d7689c6306b", 00:26:30.722 "numa_id": -1, 00:26:30.722 "assigned_rate_limits": { 00:26:30.722 "rw_ios_per_sec": 0, 00:26:30.722 "rw_mbytes_per_sec": 0, 00:26:30.722 "r_mbytes_per_sec": 0, 00:26:30.722 "w_mbytes_per_sec": 0 00:26:30.722 }, 00:26:30.722 "claimed": true, 00:26:30.722 "claim_type": "read_many_write_one", 00:26:30.722 "zoned": false, 00:26:30.722 "supported_io_types": { 00:26:30.722 "read": true, 00:26:30.722 "write": true, 00:26:30.723 "unmap": true, 00:26:30.723 "flush": true, 00:26:30.723 "reset": true, 00:26:30.723 "nvme_admin": true, 00:26:30.723 "nvme_io": true, 00:26:30.723 "nvme_io_md": false, 00:26:30.723 "write_zeroes": true, 00:26:30.723 "zcopy": false, 00:26:30.723 "get_zone_info": false, 00:26:30.723 "zone_management": false, 00:26:30.723 "zone_append": false, 00:26:30.723 "compare": true, 00:26:30.723 "compare_and_write": false, 00:26:30.723 "abort": true, 00:26:30.723 "seek_hole": false, 00:26:30.723 "seek_data": false, 00:26:30.723 "copy": true, 00:26:30.723 "nvme_iov_md": false 00:26:30.723 }, 00:26:30.723 "driver_specific": { 00:26:30.723 "nvme": [ 00:26:30.723 { 00:26:30.723 "pci_address": "0000:00:11.0", 00:26:30.723 "trid": { 00:26:30.723 "trtype": "PCIe", 00:26:30.723 "traddr": "0000:00:11.0" 00:26:30.723 }, 00:26:30.723 "ctrlr_data": { 00:26:30.723 "cntlid": 0, 00:26:30.723 "vendor_id": "0x1b36", 00:26:30.723 "model_number": "QEMU NVMe Ctrl", 00:26:30.723 "serial_number": "12341", 00:26:30.723 "firmware_revision": "8.0.0", 00:26:30.723 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:30.723 "oacs": { 00:26:30.723 "security": 0, 00:26:30.723 "format": 1, 00:26:30.723 "firmware": 0, 00:26:30.723 "ns_manage": 1 00:26:30.723 }, 00:26:30.723 "multi_ctrlr": false, 00:26:30.723 "ana_reporting": false 00:26:30.723 }, 00:26:30.723 "vs": { 00:26:30.723 "nvme_version": "1.4" 00:26:30.723 }, 00:26:30.723 "ns_data": { 00:26:30.723 "id": 1, 00:26:30.723 "can_share": false 00:26:30.723 } 00:26:30.723 } 00:26:30.723 ], 00:26:30.723 "mp_policy": "active_passive" 00:26:30.723 } 00:26:30.723 } 00:26:30.723 ]' 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:30.984 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:31.244 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f3226a3c-4699-4e59-a34c-5850eeddb357 00:26:31.244 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:31.244 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3226a3c-4699-4e59-a34c-5850eeddb357 00:26:31.505 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:31.505 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=dd8f1b41-b803-4a43-97c5-5d08df6642e6 00:26:31.505 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u dd8f1b41-b803-4a43-97c5-5d08df6642e6 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a8d26029-fb3d-4214-b60f-820418c6739d 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a8d26029-fb3d-4214-b60f-820418c6739d ]] 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a8d26029-fb3d-4214-b60f-820418c6739d 5120 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a8d26029-fb3d-4214-b60f-820418c6739d 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a8d26029-fb3d-4214-b60f-820418c6739d 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a8d26029-fb3d-4214-b60f-820418c6739d 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:31.767 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8d26029-fb3d-4214-b60f-820418c6739d 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:32.028 { 00:26:32.028 "name": "a8d26029-fb3d-4214-b60f-820418c6739d", 00:26:32.028 "aliases": [ 00:26:32.028 "lvs/basen1p0" 00:26:32.028 ], 00:26:32.028 "product_name": "Logical Volume", 00:26:32.028 "block_size": 4096, 00:26:32.028 "num_blocks": 5242880, 00:26:32.028 "uuid": "a8d26029-fb3d-4214-b60f-820418c6739d", 00:26:32.028 "assigned_rate_limits": { 00:26:32.028 "rw_ios_per_sec": 0, 00:26:32.028 "rw_mbytes_per_sec": 0, 00:26:32.028 "r_mbytes_per_sec": 0, 00:26:32.028 "w_mbytes_per_sec": 0 00:26:32.028 }, 00:26:32.028 "claimed": false, 00:26:32.028 "zoned": false, 00:26:32.028 "supported_io_types": { 00:26:32.028 "read": true, 00:26:32.028 "write": true, 00:26:32.028 "unmap": true, 00:26:32.028 "flush": false, 00:26:32.028 "reset": true, 00:26:32.028 "nvme_admin": false, 00:26:32.028 "nvme_io": false, 00:26:32.028 "nvme_io_md": false, 00:26:32.028 "write_zeroes": 
true, 00:26:32.028 "zcopy": false, 00:26:32.028 "get_zone_info": false, 00:26:32.028 "zone_management": false, 00:26:32.028 "zone_append": false, 00:26:32.028 "compare": false, 00:26:32.028 "compare_and_write": false, 00:26:32.028 "abort": false, 00:26:32.028 "seek_hole": true, 00:26:32.028 "seek_data": true, 00:26:32.028 "copy": false, 00:26:32.028 "nvme_iov_md": false 00:26:32.028 }, 00:26:32.028 "driver_specific": { 00:26:32.028 "lvol": { 00:26:32.028 "lvol_store_uuid": "dd8f1b41-b803-4a43-97c5-5d08df6642e6", 00:26:32.028 "base_bdev": "basen1", 00:26:32.028 "thin_provision": true, 00:26:32.028 "num_allocated_clusters": 0, 00:26:32.028 "snapshot": false, 00:26:32.028 "clone": false, 00:26:32.028 "esnap_clone": false 00:26:32.028 } 00:26:32.028 } 00:26:32.028 } 00:26:32.028 ]' 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:32.028 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:32.290 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:32.290 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:32.290 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:32.551 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:32.551 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:32.551 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a8d26029-fb3d-4214-b60f-820418c6739d -c cachen1p0 --l2p_dram_limit 2 00:26:32.814 [2024-11-17 15:03:18.190886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.190938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:32.814 [2024-11-17 15:03:18.190950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:32.814 [2024-11-17 15:03:18.190970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.191020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.191028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:32.814 [2024-11-17 15:03:18.191036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:32.814 [2024-11-17 15:03:18.191042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.191058] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:32.814 [2024-11-17 
15:03:18.191599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:32.814 [2024-11-17 15:03:18.191619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.191625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:32.814 [2024-11-17 15:03:18.191633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.562 ms 00:26:32.814 [2024-11-17 15:03:18.191639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.191692] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 306cf6e8-507b-468a-b3be-5907850a0378 00:26:32.814 [2024-11-17 15:03:18.192665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.192688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:32.814 [2024-11-17 15:03:18.192696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:32.814 [2024-11-17 15:03:18.192703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.197421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.197451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:32.814 [2024-11-17 15:03:18.197460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.684 ms 00:26:32.814 [2024-11-17 15:03:18.197468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.197497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.197505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:32.814 [2024-11-17 15:03:18.197511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:32.814 [2024-11-17 15:03:18.197520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.197555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.197564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:32.814 [2024-11-17 15:03:18.197571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:32.814 [2024-11-17 15:03:18.197581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.197597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:32.814 [2024-11-17 15:03:18.200493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.200518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:32.814 [2024-11-17 15:03:18.200528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.898 ms 00:26:32.814 [2024-11-17 15:03:18.200534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.200554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.200561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:32.814 [2024-11-17 15:03:18.200568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:32.814 [2024-11-17 15:03:18.200574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.200594] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:32.814 [2024-11-17 15:03:18.200699] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:32.814 [2024-11-17 15:03:18.200711] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:32.814 [2024-11-17 15:03:18.200720] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:32.814 [2024-11-17 15:03:18.200729] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:32.814 [2024-11-17 15:03:18.200736] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:32.814 [2024-11-17 15:03:18.200743] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:32.814 [2024-11-17 15:03:18.200749] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:32.814 [2024-11-17 15:03:18.200758] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:32.814 [2024-11-17 15:03:18.200763] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:32.814 [2024-11-17 15:03:18.200770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.200775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:32.814 [2024-11-17 15:03:18.200782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:26:32.814 [2024-11-17 15:03:18.200787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.200852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.814 [2024-11-17 15:03:18.200858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:32.814 [2024-11-17 15:03:18.200866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:32.814 [2024-11-17 15:03:18.200876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.814 [2024-11-17 15:03:18.200970] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:32.814 [2024-11-17 15:03:18.200978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:32.814 [2024-11-17 15:03:18.200986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:32.814 [2024-11-17 15:03:18.200992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.814 [2024-11-17 15:03:18.201000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:32.814 [2024-11-17 15:03:18.201005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:32.814 [2024-11-17 15:03:18.201011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:32.814 [2024-11-17 15:03:18.201016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:32.814 [2024-11-17 15:03:18.201022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:32.814 [2024-11-17 15:03:18.201028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.814 [2024-11-17 15:03:18.201035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:32.814 [2024-11-17 15:03:18.201040] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:32.814 [2024-11-17 15:03:18.201046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.814 [2024-11-17 15:03:18.201051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:32.814 [2024-11-17 15:03:18.201059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:32.814 [2024-11-17 15:03:18.201064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.814 [2024-11-17 15:03:18.201072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:32.814 [2024-11-17 15:03:18.201077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:32.814 [2024-11-17 15:03:18.201084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.814 [2024-11-17 15:03:18.201089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:32.814 [2024-11-17 15:03:18.201096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:32.814 [2024-11-17 15:03:18.201101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:32.814 [2024-11-17 15:03:18.201107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:32.814 [2024-11-17 15:03:18.201112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:32.814 [2024-11-17 15:03:18.201119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:32.814 [2024-11-17 15:03:18.201124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:32.814 [2024-11-17 15:03:18.201130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:32.814 [2024-11-17 15:03:18.201135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:32.814 [2024-11-17 15:03:18.201141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:32.814 [2024-11-17 15:03:18.201146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:32.814 [2024-11-17 15:03:18.201152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:32.814 [2024-11-17 15:03:18.201157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:32.814 [2024-11-17 15:03:18.201165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:32.814 [2024-11-17 15:03:18.201170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.815 [2024-11-17 15:03:18.201176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:32.815 [2024-11-17 15:03:18.201181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:32.815 [2024-11-17 15:03:18.201188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.815 [2024-11-17 15:03:18.201192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:32.815 [2024-11-17 15:03:18.201199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:32.815 [2024-11-17 15:03:18.201203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.815 [2024-11-17 15:03:18.201210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:32.815 [2024-11-17 15:03:18.201215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:32.815 [2024-11-17 15:03:18.201221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.815 [2024-11-17 15:03:18.201226] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:32.815 [2024-11-17 15:03:18.201233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:32.815 [2024-11-17 15:03:18.201238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:32.815 [2024-11-17 15:03:18.201247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:32.815 [2024-11-17 15:03:18.201253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:32.815 [2024-11-17 15:03:18.201265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:32.815 [2024-11-17 15:03:18.201270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:32.815 [2024-11-17 15:03:18.201276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:32.815 [2024-11-17 15:03:18.201281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:32.815 [2024-11-17 15:03:18.201288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:32.815 [2024-11-17 15:03:18.201296] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:32.815 [2024-11-17 15:03:18.201304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:32.815 [2024-11-17 15:03:18.201319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:32.815 [2024-11-17 15:03:18.201337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:32.815 [2024-11-17 15:03:18.201343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:32.815 [2024-11-17 15:03:18.201349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:32.815 [2024-11-17 15:03:18.201355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:32.815 [2024-11-17 15:03:18.201399] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:32.815 [2024-11-17 15:03:18.201407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:32.815 [2024-11-17 15:03:18.201420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:32.815 [2024-11-17 15:03:18.201426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:32.815 [2024-11-17 15:03:18.201433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:32.815 [2024-11-17 15:03:18.201438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.815 [2024-11-17 15:03:18.201445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:32.815 [2024-11-17 15:03:18.201451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:26:32.815 [2024-11-17 15:03:18.201458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.815 [2024-11-17 15:03:18.201486] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:26:32.815 [2024-11-17 15:03:18.201495] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:37.024 [2024-11-17 15:03:21.958957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.024 [2024-11-17 15:03:21.959045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:37.024 [2024-11-17 15:03:21.959064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3757.451 ms 00:26:37.024 [2024-11-17 15:03:21.959076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.024 [2024-11-17 15:03:21.990358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.024 [2024-11-17 15:03:21.990422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:37.024 [2024-11-17 15:03:21.990437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.037 ms 00:26:37.024 [2024-11-17 15:03:21.990448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.024 [2024-11-17 15:03:21.990535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.024 [2024-11-17 15:03:21.990549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:37.024 [2024-11-17 15:03:21.990558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:37.024 [2024-11-17 15:03:21.990573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.024 [2024-11-17 15:03:22.025817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.024 [2024-11-17 15:03:22.025866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:37.024 [2024-11-17 15:03:22.025878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.205 ms 00:26:37.024 [2024-11-17 15:03:22.025889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.024 [2024-11-17 15:03:22.025937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.024 [2024-11-17 15:03:22.025954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:37.024 [2024-11-17 15:03:22.025963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:37.024 [2024-11-17 15:03:22.025973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.024 [2024-11-17 15:03:22.026565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.024 [2024-11-17 15:03:22.026600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:37.024 [2024-11-17 15:03:22.026611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:26:37.024 [2024-11-17 15:03:22.026622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.024 [2024-11-17 15:03:22.026676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.024 [2024-11-17 15:03:22.026688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:37.025 [2024-11-17 15:03:22.026700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:37.025 [2024-11-17 15:03:22.026713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.043972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.044175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:37.025 [2024-11-17 15:03:22.044194] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.239 ms 00:26:37.025 [2024-11-17 15:03:22.044205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.057232] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:37.025 [2024-11-17 15:03:22.058529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.058569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:37.025 [2024-11-17 15:03:22.058583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.231 ms 00:26:37.025 [2024-11-17 15:03:22.058592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.097866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.097937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:37.025 [2024-11-17 15:03:22.097957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.239 ms 00:26:37.025 [2024-11-17 15:03:22.097966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.098074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.098111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:37.025 [2024-11-17 15:03:22.098128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:37.025 [2024-11-17 15:03:22.098137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.123190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.123238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:37.025 [2024-11-17 15:03:22.123254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.994 ms 00:26:37.025 [2024-11-17 15:03:22.123263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.148050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.148095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:37.025 [2024-11-17 15:03:22.148110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.728 ms 00:26:37.025 [2024-11-17 15:03:22.148117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.148716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.148735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:37.025 [2024-11-17 15:03:22.148746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:26:37.025 [2024-11-17 15:03:22.148754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.236083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.236130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:37.025 [2024-11-17 15:03:22.236151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 87.280 ms 00:26:37.025 [2024-11-17 15:03:22.236159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.263875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:37.025 [2024-11-17 15:03:22.263940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:37.025 [2024-11-17 15:03:22.263964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.619 ms 00:26:37.025 [2024-11-17 15:03:22.263972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.290057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.290101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:37.025 [2024-11-17 15:03:22.290116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.028 ms 00:26:37.025 [2024-11-17 15:03:22.290123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.316236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.316282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:37.025 [2024-11-17 15:03:22.316297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.059 ms 00:26:37.025 [2024-11-17 15:03:22.316304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.316359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.316368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:37.025 [2024-11-17 15:03:22.316383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:37.025 [2024-11-17 15:03:22.316390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.316482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.025 [2024-11-17 15:03:22.316493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:37.025 [2024-11-17 15:03:22.316507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:37.025 [2024-11-17 15:03:22.316514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.025 [2024-11-17 15:03:22.317698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4126.322 ms, result 0 00:26:37.025 { 00:26:37.025 "name": "ftl", 00:26:37.025 "uuid": "306cf6e8-507b-468a-b3be-5907850a0378" 00:26:37.025 } 00:26:37.025 15:03:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:37.025 [2024-11-17 15:03:22.548801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.286 15:03:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:37.286 15:03:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:37.548 [2024-11-17 15:03:22.925140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:37.548 15:03:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:37.809 [2024-11-17 15:03:23.129490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:37.809 15:03:23 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:38.070 Fill FTL, iteration 1 00:26:38.070 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:38.070 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:38.070 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:38.070 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:38.070 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:38.070 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:38.070 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:38.070 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=79957 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 79957 /var/tmp/spdk.tgt.sock 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79957 ']' 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:38.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.071 15:03:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:38.071 [2024-11-17 15:03:23.553575] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:38.071 [2024-11-17 15:03:23.553855] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79957 ] 00:26:38.332 [2024-11-17 15:03:23.710280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.332 [2024-11-17 15:03:23.836643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.292 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.292 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:39.292 15:03:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:39.292 ftln1 00:26:39.292 15:03:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:39.292 15:03:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 79957 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79957 ']' 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 79957 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79957 00:26:39.553 killing process with pid 79957 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79957' 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 79957 00:26:39.553 15:03:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 79957 00:26:40.941 15:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:40.941 15:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:40.941 [2024-11-17 15:03:26.480609] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:40.941 [2024-11-17 15:03:26.480722] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80004 ] 00:26:41.203 [2024-11-17 15:03:26.638977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.203 [2024-11-17 15:03:26.732211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.589  [2024-11-17T15:03:29.076Z] Copying: 233/1024 [MB] (233 MBps) [2024-11-17T15:03:30.471Z] Copying: 489/1024 [MB] (256 MBps) [2024-11-17T15:03:31.416Z] Copying: 738/1024 [MB] (249 MBps) [2024-11-17T15:03:31.416Z] Copying: 981/1024 [MB] (243 MBps) [2024-11-17T15:03:31.988Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:26:46.445 00:26:46.445 Calculate MD5 checksum, iteration 1 00:26:46.445 15:03:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:46.445 15:03:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:46.445 15:03:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:46.445 15:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:46.445 15:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:46.445 15:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:46.446 15:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:46.446 15:03:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:46.446 [2024-11-17 15:03:31.914967] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:46.446 [2024-11-17 15:03:31.915234] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80063 ] 00:26:46.706 [2024-11-17 15:03:32.074112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.706 [2024-11-17 15:03:32.167382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.091  [2024-11-17T15:03:34.234Z] Copying: 628/1024 [MB] (628 MBps) [2024-11-17T15:03:34.836Z] Copying: 1024/1024 [MB] (average 601 MBps) 00:26:49.293 00:26:49.293 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:49.293 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:51.841 Fill FTL, iteration 2 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=afb9be6b0f6f151a09f2c1b9bc4d2e86 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:51.841 15:03:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:51.841 [2024-11-17 15:03:36.913583] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:51.841 [2024-11-17 15:03:36.913695] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80124 ] 00:26:51.841 [2024-11-17 15:03:37.073906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.841 [2024-11-17 15:03:37.166679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.228  [2024-11-17T15:03:39.720Z] Copying: 204/1024 [MB] (204 MBps) [2024-11-17T15:03:40.662Z] Copying: 457/1024 [MB] (253 MBps) [2024-11-17T15:03:41.604Z] Copying: 697/1024 [MB] (240 MBps) [2024-11-17T15:03:42.548Z] Copying: 869/1024 [MB] (172 MBps) [2024-11-17T15:03:43.118Z] Copying: 1024/1024 [MB] (average 210 MBps) 00:26:57.575 00:26:57.575 Calculate MD5 checksum, iteration 2 00:26:57.575 15:03:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:57.575 15:03:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:57.575 15:03:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:57.575 15:03:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:57.576 15:03:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:57.576 15:03:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:57.576 15:03:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:57.576 15:03:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:57.576 [2024-11-17 15:03:43.062965] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:57.576 [2024-11-17 15:03:43.063083] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80191 ] 00:26:57.837 [2024-11-17 15:03:43.208620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.837 [2024-11-17 15:03:43.283402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.222  [2024-11-17T15:03:45.337Z] Copying: 695/1024 [MB] (695 MBps) [2024-11-17T15:03:46.280Z] Copying: 1024/1024 [MB] (average 678 MBps) 00:27:00.737 00:27:00.737 15:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:00.738 15:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:03.286 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:03.286 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1ba47d5e5ea333501c9e6bc5d6f9e905 00:27:03.286 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:03.286 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:03.286 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:03.286 [2024-11-17 15:03:48.465392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.286 [2024-11-17 15:03:48.465471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:03.286 [2024-11-17 15:03:48.465491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:03.286 [2024-11-17 15:03:48.465502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.286 [2024-11-17 15:03:48.465530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.286 [2024-11-17 15:03:48.465542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:03.286 [2024-11-17 15:03:48.465552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:03.287 [2024-11-17 15:03:48.465564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.287 [2024-11-17 15:03:48.465587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.287 [2024-11-17 15:03:48.465597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:03.287 [2024-11-17 15:03:48.465607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:03.287 [2024-11-17 15:03:48.465617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.287 [2024-11-17 15:03:48.465696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.294 ms, result 0 00:27:03.287 true 00:27:03.287 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:03.287 { 00:27:03.287 "name": "ftl", 00:27:03.287 "properties": [ 00:27:03.287 { 00:27:03.287 "name": "superblock_version", 00:27:03.287 "value": 5, 00:27:03.287 "read-only": true 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "name": "base_device", 00:27:03.287 "bands": [ 00:27:03.287 { 00:27:03.287 "id": 0, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 
00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 1, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 2, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 3, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 4, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 5, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 6, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 7, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 8, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 9, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 10, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 11, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 12, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 13, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 14, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 15, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 16, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 17, 00:27:03.287 "state": "FREE", 00:27:03.287 "validity": 0.0 00:27:03.287 } 00:27:03.287 ], 00:27:03.287 "read-only": true 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "name": "cache_device", 00:27:03.287 "type": "bdev", 00:27:03.287 "chunks": [ 00:27:03.287 { 00:27:03.287 "id": 0, 00:27:03.287 "state": "INACTIVE", 00:27:03.287 "utilization": 0.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 1, 00:27:03.287 "state": "CLOSED", 00:27:03.287 "utilization": 1.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 2, 00:27:03.287 "state": "CLOSED", 00:27:03.287 "utilization": 1.0 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 3, 00:27:03.287 "state": "OPEN", 00:27:03.287 "utilization": 0.001953125 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "id": 4, 00:27:03.287 "state": "OPEN", 00:27:03.287 "utilization": 0.0 00:27:03.287 } 00:27:03.287 ], 00:27:03.287 "read-only": true 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "name": "verbose_mode", 00:27:03.287 "value": true, 00:27:03.287 "unit": "", 00:27:03.287 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:03.287 }, 00:27:03.287 { 00:27:03.287 "name": "prep_upgrade_on_shutdown", 00:27:03.287 "value": false, 00:27:03.287 "unit": "", 00:27:03.287 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:03.287 } 00:27:03.287 ] 00:27:03.287 } 00:27:03.287 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:03.549 [2024-11-17 15:03:48.893732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:03.549 [2024-11-17 15:03:48.893792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:03.549 [2024-11-17 15:03:48.893805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:03.549 [2024-11-17 15:03:48.893814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.549 [2024-11-17 15:03:48.893838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.549 [2024-11-17 15:03:48.893848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:03.549 [2024-11-17 15:03:48.893857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:03.549 [2024-11-17 15:03:48.893864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.549 [2024-11-17 15:03:48.893884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.549 [2024-11-17 15:03:48.893892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:03.549 [2024-11-17 15:03:48.893900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:03.549 [2024-11-17 15:03:48.893908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.549 [2024-11-17 15:03:48.893985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.244 ms, result 0 00:27:03.549 true 00:27:03.549 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:03.549 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:03.549 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:03.810 15:03:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:03.810 15:03:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:03.810 15:03:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:03.810 [2024-11-17 15:03:49.322184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.810 [2024-11-17 15:03:49.322243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:03.810 [2024-11-17 15:03:49.322256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:03.810 [2024-11-17 15:03:49.322265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.810 [2024-11-17 15:03:49.322308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.810 [2024-11-17 15:03:49.322319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:03.810 [2024-11-17 15:03:49.322328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:03.810 [2024-11-17 15:03:49.322336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.810 [2024-11-17 15:03:49.322356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.810 [2024-11-17 15:03:49.322365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:03.810 [2024-11-17 15:03:49.322373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:03.810 [2024-11-17 15:03:49.322381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:03.810 [2024-11-17 15:03:49.322439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.249 ms, result 0 00:27:03.810 true 00:27:04.069 15:03:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:04.069 { 00:27:04.069 "name": "ftl", 00:27:04.069 "properties": [ 00:27:04.069 { 00:27:04.069 "name": "superblock_version", 00:27:04.069 "value": 5, 00:27:04.069 "read-only": true 00:27:04.069 }, 00:27:04.069 { 00:27:04.069 "name": "base_device", 00:27:04.069 "bands": [ 00:27:04.069 { 00:27:04.069 "id": 0, 00:27:04.069 "state": "FREE", 00:27:04.069 "validity": 0.0 00:27:04.069 }, 00:27:04.069 { 00:27:04.069 "id": 1, 00:27:04.069 "state": "FREE", 00:27:04.069 "validity": 0.0 00:27:04.069 }, 00:27:04.069 { 00:27:04.069 "id": 2, 00:27:04.069 "state": "FREE", 00:27:04.069 "validity": 0.0 00:27:04.069 }, 00:27:04.069 { 00:27:04.069 "id": 3, 00:27:04.069 "state": "FREE", 00:27:04.069 "validity": 0.0 00:27:04.069 }, 00:27:04.069 { 00:27:04.069 "id": 4, 00:27:04.069 "state": "FREE", 00:27:04.069 "validity": 0.0 00:27:04.069 }, 00:27:04.069 { 00:27:04.069 "id": 5, 00:27:04.069 "state": "FREE", 00:27:04.069 "validity": 0.0 00:27:04.069 }, 00:27:04.069 { 00:27:04.069 "id": 6, 00:27:04.069 "state": "FREE", 00:27:04.069 "validity": 0.0 00:27:04.069 }, 00:27:04.069 { 00:27:04.069 "id": 7, 00:27:04.069 "state": "FREE", 00:27:04.069 "validity": 0.0 00:27:04.069 }, 00:27:04.069 { 00:27:04.069 "id": 8, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 9, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 10, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 11, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 12, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 13, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 14, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 15, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 16, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 17, 00:27:04.070 "state": "FREE", 00:27:04.070 "validity": 0.0 00:27:04.070 } 00:27:04.070 ], 00:27:04.070 "read-only": true 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "name": "cache_device", 00:27:04.070 "type": "bdev", 00:27:04.070 "chunks": [ 00:27:04.070 { 00:27:04.070 "id": 0, 00:27:04.070 "state": "INACTIVE", 00:27:04.070 "utilization": 0.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 1, 00:27:04.070 "state": "CLOSED", 00:27:04.070 "utilization": 1.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 2, 00:27:04.070 "state": "CLOSED", 00:27:04.070 "utilization": 1.0 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 3, 00:27:04.070 "state": "OPEN", 00:27:04.070 "utilization": 0.001953125 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "id": 4, 00:27:04.070 "state": "OPEN", 00:27:04.070 "utilization": 0.0 00:27:04.070 } 00:27:04.070 ], 00:27:04.070 "read-only": true 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "name": "verbose_mode", 
00:27:04.070 "value": true, 00:27:04.070 "unit": "", 00:27:04.070 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:04.070 }, 00:27:04.070 { 00:27:04.070 "name": "prep_upgrade_on_shutdown", 00:27:04.070 "value": true, 00:27:04.070 "unit": "", 00:27:04.070 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:04.070 } 00:27:04.070 ] 00:27:04.070 } 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79835 ]] 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79835 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79835 ']' 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 79835 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79835 00:27:04.070 killing process with pid 79835 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79835' 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 79835 00:27:04.070 15:03:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 79835 00:27:05.007 [2024-11-17 15:03:50.179438] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:05.007 [2024-11-17 15:03:50.192294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.007 [2024-11-17 15:03:50.192332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:05.007 [2024-11-17 15:03:50.192342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:05.007 [2024-11-17 15:03:50.192350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.007 [2024-11-17 15:03:50.192368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:05.007 [2024-11-17 15:03:50.194529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.007 [2024-11-17 15:03:50.194556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:05.007 [2024-11-17 15:03:50.194565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.149 ms 00:27:05.007 [2024-11-17 15:03:50.194573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.162 [2024-11-17 15:03:58.532055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.163 [2024-11-17 15:03:58.532103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:13.163 [2024-11-17 15:03:58.532115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8337.428 ms 00:27:13.163 [2024-11-17 15:03:58.532125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.163 [2024-11-17 15:03:58.533144] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:13.163 [2024-11-17 15:03:58.533162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:13.163 [2024-11-17 15:03:58.533169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.007 ms 00:27:13.163 [2024-11-17 15:03:58.533176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.163 [2024-11-17 15:03:58.534047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.163 [2024-11-17 15:03:58.534068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:13.163 [2024-11-17 15:03:58.534076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.854 ms 00:27:13.163 [2024-11-17 15:03:58.534083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.163 [2024-11-17 15:03:58.541546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.163 [2024-11-17 15:03:58.541574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:13.163 [2024-11-17 15:03:58.541581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.426 ms 00:27:13.163 [2024-11-17 15:03:58.541587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.163 [2024-11-17 15:03:58.546749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.163 [2024-11-17 15:03:58.546779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:13.163 [2024-11-17 15:03:58.546787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.137 ms 00:27:13.163 [2024-11-17 15:03:58.546795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.163 [2024-11-17 15:03:58.546838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.163 [2024-11-17 15:03:58.546846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:13.163 [2024-11-17 15:03:58.546856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:13.163 [2024-11-17 15:03:58.546862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.163 [2024-11-17 15:03:58.553961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.163 [2024-11-17 15:03:58.553988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:13.163 [2024-11-17 15:03:58.553995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.087 ms 00:27:13.164 [2024-11-17 15:03:58.554000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.164 [2024-11-17 15:03:58.561118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.164 [2024-11-17 15:03:58.561144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:13.164 [2024-11-17 15:03:58.561151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.094 ms 00:27:13.164 [2024-11-17 15:03:58.561156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.164 [2024-11-17 15:03:58.567948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.164 [2024-11-17 15:03:58.567973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:13.164 [2024-11-17 15:03:58.567980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.768 ms 00:27:13.164 [2024-11-17 15:03:58.567986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.164 [2024-11-17 15:03:58.575244] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.164 [2024-11-17 15:03:58.575270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:13.164 [2024-11-17 15:03:58.575277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.212 ms 00:27:13.164 [2024-11-17 15:03:58.575282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.164 [2024-11-17 15:03:58.575305] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:13.164 [2024-11-17 15:03:58.575316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:13.164 [2024-11-17 15:03:58.575323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:13.164 [2024-11-17 15:03:58.575335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:13.164 [2024-11-17 15:03:58.575342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:13.165 [2024-11-17 15:03:58.575427] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:13.165 [2024-11-17 15:03:58.575433] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 306cf6e8-507b-468a-b3be-5907850a0378 00:27:13.165 [2024-11-17 15:03:58.575439] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:13.165 [2024-11-17 15:03:58.575444] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:13.165 [2024-11-17 15:03:58.575449] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:13.165 [2024-11-17 15:03:58.575454] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:13.165 [2024-11-17 15:03:58.575460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:13.165 [2024-11-17 15:03:58.575467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:13.168 [2024-11-17 15:03:58.575473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:13.169 [2024-11-17 15:03:58.575478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:13.169 [2024-11-17 15:03:58.575482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:13.169 [2024-11-17 15:03:58.575487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.169 [2024-11-17 15:03:58.575497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:13.169 [2024-11-17 15:03:58.575503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:27:13.169 [2024-11-17 15:03:58.575508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.169 [2024-11-17 15:03:58.584934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.169 [2024-11-17 15:03:58.584959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:13.169 [2024-11-17 15:03:58.584967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.414 ms 00:27:13.169 [2024-11-17 15:03:58.584976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.169 [2024-11-17 15:03:58.585232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.169 [2024-11-17 15:03:58.585239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:13.169 [2024-11-17 15:03:58.585246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:27:13.169 [2024-11-17 15:03:58.585252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.169 [2024-11-17 15:03:58.617964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.169 [2024-11-17 15:03:58.617991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:13.169 [2024-11-17 15:03:58.618002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.169 [2024-11-17 15:03:58.618008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.169 [2024-11-17 15:03:58.618029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.169 [2024-11-17 15:03:58.618035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:13.169 [2024-11-17 15:03:58.618041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.169 [2024-11-17 15:03:58.618047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.169 [2024-11-17 15:03:58.618090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.169 [2024-11-17 15:03:58.618098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:13.169 [2024-11-17 15:03:58.618104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.169 [2024-11-17 15:03:58.618110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.169 [2024-11-17 15:03:58.618124] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.169 [2024-11-17 15:03:58.618130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:13.173 [2024-11-17 15:03:58.618135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.173 [2024-11-17 15:03:58.618142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.173 [2024-11-17 15:03:58.678014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.173 [2024-11-17 15:03:58.678047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:13.173 [2024-11-17 15:03:58.678056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.173 [2024-11-17 15:03:58.678065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.442 [2024-11-17 15:03:58.726767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.442 [2024-11-17 15:03:58.726802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:13.442 [2024-11-17 15:03:58.726810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.442 [2024-11-17 15:03:58.726816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.442 [2024-11-17 15:03:58.726878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.442 [2024-11-17 15:03:58.726886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:13.442 [2024-11-17 15:03:58.726892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.442 [2024-11-17 15:03:58.726898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.442 [2024-11-17 15:03:58.726941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.442 [2024-11-17 15:03:58.726949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:13.442 [2024-11-17 15:03:58.726955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.442 [2024-11-17 15:03:58.726962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.442 [2024-11-17 15:03:58.727030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.442 [2024-11-17 15:03:58.727037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:13.442 [2024-11-17 15:03:58.727044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.442 [2024-11-17 15:03:58.727049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.442 [2024-11-17 15:03:58.727071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.442 [2024-11-17 15:03:58.727080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:13.442 [2024-11-17 15:03:58.727086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.442 [2024-11-17 15:03:58.727092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.442 [2024-11-17 15:03:58.727121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.442 [2024-11-17 15:03:58.727127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:13.442 [2024-11-17 15:03:58.727134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.442 [2024-11-17 15:03:58.727139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.442 
[2024-11-17 15:03:58.727174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.442 [2024-11-17 15:03:58.727181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:13.442 [2024-11-17 15:03:58.727187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.442 [2024-11-17 15:03:58.727193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.442 [2024-11-17 15:03:58.727283] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8534.942 ms, result 0 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80379 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:17.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80379 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80379 ']' 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.653 15:04:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:17.653 [2024-11-17 15:04:02.668090] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
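Side note on the band/statistics dump printed during the FTL shutdown above: the counters it reports are internally consistent. Treating WAF as total writes divided by user writes, and summing the per-band valid block counts, reproduces the printed values; a minimal check with the numbers copied from the dump:

    awk 'BEGIN { printf "WAF = %.4f\n", 786752 / 524288 }'   # 786752 total / 524288 user writes -> 1.5006
    echo $((261120 + 261120 + 2048))                         # valid blocks of bands 1-3 -> 524288, the total valid LBAs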
00:27:17.653 [2024-11-17 15:04:02.668200] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80379 ] 00:27:17.653 [2024-11-17 15:04:02.825792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.653 [2024-11-17 15:04:02.903482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.227 [2024-11-17 15:04:03.467256] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:18.227 [2024-11-17 15:04:03.467304] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:18.227 [2024-11-17 15:04:03.610036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.227 [2024-11-17 15:04:03.610158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:18.227 [2024-11-17 15:04:03.610173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:18.227 [2024-11-17 15:04:03.610179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.227 [2024-11-17 15:04:03.610223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.227 [2024-11-17 15:04:03.610231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:18.227 [2024-11-17 15:04:03.610237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:18.227 [2024-11-17 15:04:03.610243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.227 [2024-11-17 15:04:03.610262] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:18.227 [2024-11-17 15:04:03.610801] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:18.227 [2024-11-17 15:04:03.610813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.227 [2024-11-17 15:04:03.610820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:18.227 [2024-11-17 15:04:03.610826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:27:18.227 [2024-11-17 15:04:03.610831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.227 [2024-11-17 15:04:03.611773] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:18.227 [2024-11-17 15:04:03.621607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.227 [2024-11-17 15:04:03.621712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:18.227 [2024-11-17 15:04:03.621729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.835 ms 00:27:18.227 [2024-11-17 15:04:03.621735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.227 [2024-11-17 15:04:03.621774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.227 [2024-11-17 15:04:03.621782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:18.227 [2024-11-17 15:04:03.621788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:18.227 [2024-11-17 15:04:03.621793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.227 [2024-11-17 15:04:03.626151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.228 [2024-11-17 
15:04:03.626175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:18.228 [2024-11-17 15:04:03.626182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.308 ms 00:27:18.228 [2024-11-17 15:04:03.626188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.228 [2024-11-17 15:04:03.626231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.228 [2024-11-17 15:04:03.626238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:18.228 [2024-11-17 15:04:03.626244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:18.228 [2024-11-17 15:04:03.626249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.228 [2024-11-17 15:04:03.626284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.228 [2024-11-17 15:04:03.626294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:18.228 [2024-11-17 15:04:03.626300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:18.228 [2024-11-17 15:04:03.626305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.228 [2024-11-17 15:04:03.626320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:18.228 [2024-11-17 15:04:03.628960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.228 [2024-11-17 15:04:03.628988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:18.228 [2024-11-17 15:04:03.628998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.643 ms 00:27:18.228 [2024-11-17 15:04:03.629003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.228 [2024-11-17 15:04:03.629026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.228 [2024-11-17 15:04:03.629033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:18.228 [2024-11-17 15:04:03.629039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:18.228 [2024-11-17 15:04:03.629044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.228 [2024-11-17 15:04:03.629059] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:18.228 [2024-11-17 15:04:03.629075] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:18.228 [2024-11-17 15:04:03.629101] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:18.228 [2024-11-17 15:04:03.629112] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:18.228 [2024-11-17 15:04:03.629190] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:18.228 [2024-11-17 15:04:03.629199] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:18.228 [2024-11-17 15:04:03.629207] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:18.228 [2024-11-17 15:04:03.629214] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:18.228 [2024-11-17 15:04:03.629223] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:18.228 [2024-11-17 15:04:03.629229] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:18.228 [2024-11-17 15:04:03.629234] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:18.228 [2024-11-17 15:04:03.629239] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:18.228 [2024-11-17 15:04:03.629245] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:18.228 [2024-11-17 15:04:03.629251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.228 [2024-11-17 15:04:03.629256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:18.228 [2024-11-17 15:04:03.629262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:27:18.228 [2024-11-17 15:04:03.629267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.228 [2024-11-17 15:04:03.629332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.228 [2024-11-17 15:04:03.629339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:18.228 [2024-11-17 15:04:03.629346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:18.228 [2024-11-17 15:04:03.629352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.228 [2024-11-17 15:04:03.629426] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:18.228 [2024-11-17 15:04:03.629434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:18.228 [2024-11-17 15:04:03.629440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:18.228 [2024-11-17 15:04:03.629446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:18.228 [2024-11-17 15:04:03.629457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:18.228 [2024-11-17 15:04:03.629467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:18.228 [2024-11-17 15:04:03.629473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:18.228 [2024-11-17 15:04:03.629478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:18.228 [2024-11-17 15:04:03.629488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:18.228 [2024-11-17 15:04:03.629493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:18.228 [2024-11-17 15:04:03.629505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:18.228 [2024-11-17 15:04:03.629510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:18.228 [2024-11-17 15:04:03.629520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:18.228 [2024-11-17 15:04:03.629525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629530] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:18.228 [2024-11-17 15:04:03.629535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:18.228 [2024-11-17 15:04:03.629540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:18.228 [2024-11-17 15:04:03.629544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:18.228 [2024-11-17 15:04:03.629549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:18.228 [2024-11-17 15:04:03.629554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:18.228 [2024-11-17 15:04:03.629563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:18.228 [2024-11-17 15:04:03.629568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:18.228 [2024-11-17 15:04:03.629573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:18.228 [2024-11-17 15:04:03.629578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:18.228 [2024-11-17 15:04:03.629583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:18.228 [2024-11-17 15:04:03.629588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:18.228 [2024-11-17 15:04:03.629593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:18.228 [2024-11-17 15:04:03.629597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:18.228 [2024-11-17 15:04:03.629602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:18.228 [2024-11-17 15:04:03.629612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:18.228 [2024-11-17 15:04:03.629617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:18.228 [2024-11-17 15:04:03.629627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:18.228 [2024-11-17 15:04:03.629642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:18.228 [2024-11-17 15:04:03.629646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629651] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:18.228 [2024-11-17 15:04:03.629657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:18.228 [2024-11-17 15:04:03.629662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:18.228 [2024-11-17 15:04:03.629670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:18.228 [2024-11-17 15:04:03.629676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:18.228 [2024-11-17 15:04:03.629681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:18.228 [2024-11-17 15:04:03.629686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:18.228 [2024-11-17 15:04:03.629691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:18.229 [2024-11-17 15:04:03.629696] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:18.229 [2024-11-17 15:04:03.629701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:18.229 [2024-11-17 15:04:03.629707] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:18.229 [2024-11-17 15:04:03.629713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:18.229 [2024-11-17 15:04:03.629725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:18.229 [2024-11-17 15:04:03.629741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:18.229 [2024-11-17 15:04:03.629747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:18.229 [2024-11-17 15:04:03.629752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:18.229 [2024-11-17 15:04:03.629757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:18.229 [2024-11-17 15:04:03.629795] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:18.229 [2024-11-17 15:04:03.629800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:18.229 [2024-11-17 15:04:03.629812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:18.229 [2024-11-17 15:04:03.629818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:18.229 [2024-11-17 15:04:03.629823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:18.229 [2024-11-17 15:04:03.629828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.229 [2024-11-17 15:04:03.629834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:18.229 [2024-11-17 15:04:03.629839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.454 ms 00:27:18.229 [2024-11-17 15:04:03.629846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.229 [2024-11-17 15:04:03.629878] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:18.229 [2024-11-17 15:04:03.629887] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:21.535 [2024-11-17 15:04:06.616980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.617063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:21.535 [2024-11-17 15:04:06.617082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2987.085 ms 00:27:21.535 [2024-11-17 15:04:06.617093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.648738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.649012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:21.535 [2024-11-17 15:04:06.649036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.385 ms 00:27:21.535 [2024-11-17 15:04:06.649047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.649147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.649159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:21.535 [2024-11-17 15:04:06.649169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:21.535 [2024-11-17 15:04:06.649178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.684606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.684795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:21.535 [2024-11-17 15:04:06.684823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.388 ms 00:27:21.535 [2024-11-17 15:04:06.684832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.684871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.684880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:21.535 [2024-11-17 15:04:06.684890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:21.535 [2024-11-17 15:04:06.684897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.685508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.685533] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:21.535 [2024-11-17 15:04:06.685543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.528 ms 00:27:21.535 [2024-11-17 15:04:06.685562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.685616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.685626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:21.535 [2024-11-17 15:04:06.685635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:21.535 [2024-11-17 15:04:06.685643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.703478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.703654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:21.535 [2024-11-17 15:04:06.703673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.812 ms 00:27:21.535 [2024-11-17 15:04:06.703682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.717975] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:21.535 [2024-11-17 15:04:06.718154] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:21.535 [2024-11-17 15:04:06.718174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.718184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:21.535 [2024-11-17 15:04:06.718194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.333 ms 00:27:21.535 [2024-11-17 15:04:06.718202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.733289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.733459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:21.535 [2024-11-17 15:04:06.733480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.007 ms 00:27:21.535 [2024-11-17 15:04:06.733489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.745938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.745981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:21.535 [2024-11-17 15:04:06.745993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.330 ms 00:27:21.535 [2024-11-17 15:04:06.746001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.535 [2024-11-17 15:04:06.758230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.535 [2024-11-17 15:04:06.758277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:21.536 [2024-11-17 15:04:06.758288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.180 ms 00:27:21.536 [2024-11-17 15:04:06.758295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.758998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.536 [2024-11-17 15:04:06.759024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:21.536 [2024-11-17 
15:04:06.759036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.584 ms 00:27:21.536 [2024-11-17 15:04:06.759044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.832285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.536 [2024-11-17 15:04:06.832360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:21.536 [2024-11-17 15:04:06.832378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.216 ms 00:27:21.536 [2024-11-17 15:04:06.832388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.843621] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:21.536 [2024-11-17 15:04:06.844756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.536 [2024-11-17 15:04:06.844800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:21.536 [2024-11-17 15:04:06.844814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.297 ms 00:27:21.536 [2024-11-17 15:04:06.844822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.844959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.536 [2024-11-17 15:04:06.844975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:21.536 [2024-11-17 15:04:06.844986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:21.536 [2024-11-17 15:04:06.844995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.845062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.536 [2024-11-17 15:04:06.845092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:21.536 [2024-11-17 15:04:06.845103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:21.536 [2024-11-17 15:04:06.845111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.845134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.536 [2024-11-17 15:04:06.845144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:21.536 [2024-11-17 15:04:06.845156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:21.536 [2024-11-17 15:04:06.845164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.845201] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:21.536 [2024-11-17 15:04:06.845212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.536 [2024-11-17 15:04:06.845221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:21.536 [2024-11-17 15:04:06.845231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:21.536 [2024-11-17 15:04:06.845239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.870569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.536 [2024-11-17 15:04:06.870626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:21.536 [2024-11-17 15:04:06.870639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.307 ms 00:27:21.536 [2024-11-17 15:04:06.870647] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.870738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.536 [2024-11-17 15:04:06.870749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:21.536 [2024-11-17 15:04:06.870759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:21.536 [2024-11-17 15:04:06.870767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.536 [2024-11-17 15:04:06.872201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3261.605 ms, result 0 00:27:21.536 [2024-11-17 15:04:06.887006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.536 [2024-11-17 15:04:06.903018] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:21.536 [2024-11-17 15:04:06.911155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:21.536 15:04:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.536 15:04:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:21.536 15:04:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:21.536 15:04:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:21.536 15:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:21.797 [2024-11-17 15:04:07.147238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.797 [2024-11-17 15:04:07.147292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:21.797 [2024-11-17 15:04:07.147306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:21.797 [2024-11-17 15:04:07.147319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.797 [2024-11-17 15:04:07.147343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.797 [2024-11-17 15:04:07.147353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:21.797 [2024-11-17 15:04:07.147362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:21.797 [2024-11-17 15:04:07.147370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.797 [2024-11-17 15:04:07.147391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.797 [2024-11-17 15:04:07.147400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:21.797 [2024-11-17 15:04:07.147409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:21.797 [2024-11-17 15:04:07.147418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.797 [2024-11-17 15:04:07.147481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.236 ms, result 0 00:27:21.797 true 00:27:21.797 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:22.059 { 00:27:22.059 "name": "ftl", 00:27:22.059 "properties": [ 00:27:22.059 { 00:27:22.059 "name": "superblock_version", 00:27:22.059 "value": 5, 00:27:22.059 "read-only": true 00:27:22.059 }, 
00:27:22.059 { 00:27:22.059 "name": "base_device", 00:27:22.059 "bands": [ 00:27:22.059 { 00:27:22.059 "id": 0, 00:27:22.059 "state": "CLOSED", 00:27:22.059 "validity": 1.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 1, 00:27:22.059 "state": "CLOSED", 00:27:22.059 "validity": 1.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 2, 00:27:22.059 "state": "CLOSED", 00:27:22.059 "validity": 0.007843137254901933 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 3, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 4, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 5, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 6, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 7, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 8, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 9, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 10, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 11, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 12, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 13, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 14, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 15, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 16, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 17, 00:27:22.059 "state": "FREE", 00:27:22.059 "validity": 0.0 00:27:22.059 } 00:27:22.059 ], 00:27:22.059 "read-only": true 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "name": "cache_device", 00:27:22.059 "type": "bdev", 00:27:22.059 "chunks": [ 00:27:22.059 { 00:27:22.059 "id": 0, 00:27:22.059 "state": "INACTIVE", 00:27:22.059 "utilization": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 1, 00:27:22.059 "state": "OPEN", 00:27:22.059 "utilization": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 2, 00:27:22.059 "state": "OPEN", 00:27:22.059 "utilization": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 3, 00:27:22.059 "state": "FREE", 00:27:22.059 "utilization": 0.0 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "id": 4, 00:27:22.059 "state": "FREE", 00:27:22.059 "utilization": 0.0 00:27:22.059 } 00:27:22.059 ], 00:27:22.059 "read-only": true 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "name": "verbose_mode", 00:27:22.059 "value": true, 00:27:22.059 "unit": "", 00:27:22.059 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:22.059 }, 00:27:22.059 { 00:27:22.059 "name": "prep_upgrade_on_shutdown", 00:27:22.059 "value": false, 00:27:22.059 "unit": "", 00:27:22.060 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:22.060 } 00:27:22.060 ] 00:27:22.060 } 00:27:22.060 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:22.060 15:04:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:22.060 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:22.321 Validate MD5 checksum, iteration 1 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:22.321 15:04:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:22.583 [2024-11-17 15:04:07.894545] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
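The two jq filters above count NV-cache chunks with non-zero utilization and bands in the OPENED state in the bdev_ftl_get_properties output; both come back as 0 here. A stand-alone re-run of the same accounting, assuming the target from this run is still listening and using the rpc.py path shown in the log:

    props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)
    used=$(echo "$props" | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    opened=$(echo "$props" | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    echo "dirty chunks: $used, open bands: $opened"   # both 0 in this run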
00:27:22.583 [2024-11-17 15:04:07.894902] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80448 ] 00:27:22.583 [2024-11-17 15:04:08.058754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.843 [2024-11-17 15:04:08.183447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.225  [2024-11-17T15:04:11.156Z] Copying: 495/1024 [MB] (495 MBps) [2024-11-17T15:04:11.156Z] Copying: 974/1024 [MB] (479 MBps) [2024-11-17T15:04:12.088Z] Copying: 1024/1024 [MB] (average 483 MBps) 00:27:26.545 00:27:26.545 15:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:26.545 15:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=afb9be6b0f6f151a09f2c1b9bc4d2e86 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ afb9be6b0f6f151a09f2c1b9bc4d2e86 != \a\f\b\9\b\e\6\b\0\f\6\f\1\5\1\a\0\9\f\2\c\1\b\9\b\c\4\d\2\e\8\6 ]] 00:27:29.108 Validate MD5 checksum, iteration 2 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:29.108 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:29.108 [2024-11-17 15:04:14.163059] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
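Each checksum iteration above follows the same pattern: read 1024 MiB (1 MiB blocks, queue depth 2) from ftln1 at an increasing --skip offset into a scratch file via the tcp_dd helper from test/ftl/common.sh, take its md5sum, and compare it against the sum recorded for that slice earlier in the test. In outline, with the paths and parameters from this run:

    for skip in 0 1024; do
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
               --bs=1048576 --count=1024 --qd=2 --skip=$skip    # common.sh wraps spdk_dd over the ini.json initiator config
        md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '
    done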
00:27:29.108 [2024-11-17 15:04:14.163297] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80521 ] 00:27:29.108 [2024-11-17 15:04:14.322272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.108 [2024-11-17 15:04:14.416096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.526  [2024-11-17T15:04:17.010Z] Copying: 570/1024 [MB] (570 MBps) [2024-11-17T15:04:19.548Z] Copying: 1024/1024 [MB] (average 534 MBps) 00:27:34.005 00:27:34.005 15:04:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:34.005 15:04:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1ba47d5e5ea333501c9e6bc5d6f9e905 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1ba47d5e5ea333501c9e6bc5d6f9e905 != \1\b\a\4\7\d\5\e\5\e\a\3\3\3\5\0\1\c\9\e\6\b\c\5\d\6\f\9\e\9\0\5 ]] 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80379 ]] 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80379 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:35.907 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80601 00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80601 00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80601 ']' 00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
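Both checksums match, so the "dirty" variant of the shutdown starts here: instead of letting FTL run its shutdown sequence again, the script SIGKILLs the old target (so no shutdown path can run), then brings up a fresh target from the same tgt.json and waits for its RPC socket. In outline, with the binary, config path and PID from this run (waitforlisten is the helper from autotest_common.sh; the PID tracking below is a sketch, common.sh keeps it in spdk_tgt_pid):

    kill -9 80379                                   # old spdk_tgt, killed without any FTL shutdown
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"                   # waits for the new target to listen on /var/tmp/spdk.sock (up to 100 retries per the xtrace above)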
00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:35.908 15:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:35.908 [2024-11-17 15:04:21.295668] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:35.908 [2024-11-17 15:04:21.295787] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80601 ] 00:27:35.908 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80379 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:36.167 [2024-11-17 15:04:21.451129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.167 [2024-11-17 15:04:21.544611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.736 [2024-11-17 15:04:22.171004] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:36.736 [2024-11-17 15:04:22.171236] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:36.998 [2024-11-17 15:04:22.315766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.315808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:36.998 [2024-11-17 15:04:22.315818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:36.998 [2024-11-17 15:04:22.315838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.315878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.315887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:36.998 [2024-11-17 15:04:22.315893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:36.998 [2024-11-17 15:04:22.315899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.315917] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:36.998 [2024-11-17 15:04:22.316434] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:36.998 [2024-11-17 15:04:22.316452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.316457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:36.998 [2024-11-17 15:04:22.316464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:27:36.998 [2024-11-17 15:04:22.316470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.316707] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:36.998 [2024-11-17 15:04:22.329125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.329155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:36.998 [2024-11-17 15:04:22.329165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.418 ms 
00:27:36.998 [2024-11-17 15:04:22.329172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.335807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.335841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:36.998 [2024-11-17 15:04:22.335851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:36.998 [2024-11-17 15:04:22.335857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.336111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.336120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:36.998 [2024-11-17 15:04:22.336127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:27:36.998 [2024-11-17 15:04:22.336132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.336169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.336178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:36.998 [2024-11-17 15:04:22.336184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:36.998 [2024-11-17 15:04:22.336190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.336208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.336213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:36.998 [2024-11-17 15:04:22.336220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:36.998 [2024-11-17 15:04:22.336226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.336240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:36.998 [2024-11-17 15:04:22.338566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.338686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:36.998 [2024-11-17 15:04:22.338699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.329 ms 00:27:36.998 [2024-11-17 15:04:22.338706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.338730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.998 [2024-11-17 15:04:22.338737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:36.998 [2024-11-17 15:04:22.338743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:36.998 [2024-11-17 15:04:22.338749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.998 [2024-11-17 15:04:22.338765] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:36.998 [2024-11-17 15:04:22.338779] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:36.998 [2024-11-17 15:04:22.338805] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:36.999 [2024-11-17 15:04:22.338818] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:36.999 [2024-11-17 
15:04:22.338895] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:36.999 [2024-11-17 15:04:22.338903] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:36.999 [2024-11-17 15:04:22.338911] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:36.999 [2024-11-17 15:04:22.338931] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:36.999 [2024-11-17 15:04:22.338939] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:36.999 [2024-11-17 15:04:22.338945] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:36.999 [2024-11-17 15:04:22.338950] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:36.999 [2024-11-17 15:04:22.338956] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:36.999 [2024-11-17 15:04:22.338962] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:36.999 [2024-11-17 15:04:22.338968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.999 [2024-11-17 15:04:22.338975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:36.999 [2024-11-17 15:04:22.338981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:27:36.999 [2024-11-17 15:04:22.338987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.999 [2024-11-17 15:04:22.339051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.999 [2024-11-17 15:04:22.339058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:36.999 [2024-11-17 15:04:22.339064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:36.999 [2024-11-17 15:04:22.339069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.999 [2024-11-17 15:04:22.339145] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:36.999 [2024-11-17 15:04:22.339152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:36.999 [2024-11-17 15:04:22.339160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:36.999 [2024-11-17 15:04:22.339166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:36.999 [2024-11-17 15:04:22.339177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:36.999 [2024-11-17 15:04:22.339187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:36.999 [2024-11-17 15:04:22.339193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:36.999 [2024-11-17 15:04:22.339198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:36.999 [2024-11-17 15:04:22.339209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:36.999 [2024-11-17 15:04:22.339215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 
15:04:22.339220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:36.999 [2024-11-17 15:04:22.339225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:36.999 [2024-11-17 15:04:22.339230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:36.999 [2024-11-17 15:04:22.339240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:36.999 [2024-11-17 15:04:22.339245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:36.999 [2024-11-17 15:04:22.339255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:36.999 [2024-11-17 15:04:22.339260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.999 [2024-11-17 15:04:22.339265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:36.999 [2024-11-17 15:04:22.339274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:36.999 [2024-11-17 15:04:22.339279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.999 [2024-11-17 15:04:22.339283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:36.999 [2024-11-17 15:04:22.339288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:36.999 [2024-11-17 15:04:22.339293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.999 [2024-11-17 15:04:22.339297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:36.999 [2024-11-17 15:04:22.339302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:36.999 [2024-11-17 15:04:22.339307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.999 [2024-11-17 15:04:22.339312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:36.999 [2024-11-17 15:04:22.339317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:36.999 [2024-11-17 15:04:22.339322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:36.999 [2024-11-17 15:04:22.339331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:36.999 [2024-11-17 15:04:22.339336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:36.999 [2024-11-17 15:04:22.339346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:36.999 [2024-11-17 15:04:22.339360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:36.999 [2024-11-17 15:04:22.339365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339373] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:36.999 [2024-11-17 15:04:22.339380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:36.999 
[2024-11-17 15:04:22.339385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:36.999 [2024-11-17 15:04:22.339391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.999 [2024-11-17 15:04:22.339396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:36.999 [2024-11-17 15:04:22.339401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:36.999 [2024-11-17 15:04:22.339406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:36.999 [2024-11-17 15:04:22.339411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:36.999 [2024-11-17 15:04:22.339416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:36.999 [2024-11-17 15:04:22.339421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:36.999 [2024-11-17 15:04:22.339427] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:36.999 [2024-11-17 15:04:22.339434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:36.999 [2024-11-17 15:04:22.339445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:36.999 [2024-11-17 15:04:22.339461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:36.999 [2024-11-17 15:04:22.339467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:36.999 [2024-11-17 15:04:22.339471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:36.999 [2024-11-17 15:04:22.339476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:36.999 [2024-11-17 15:04:22.339513] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:36.999 [2024-11-17 15:04:22.339519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:36.999 [2024-11-17 15:04:22.339530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:36.999 [2024-11-17 15:04:22.339536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:36.999 [2024-11-17 15:04:22.339542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:36.999 [2024-11-17 15:04:22.339550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.999 [2024-11-17 15:04:22.339557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:36.999 [2024-11-17 15:04:22.339563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.458 ms 00:27:36.999 [2024-11-17 15:04:22.339568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.999 [2024-11-17 15:04:22.358818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.358848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:37.000 [2024-11-17 15:04:22.358856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.212 ms 00:27:37.000 [2024-11-17 15:04:22.358863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.358891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.358898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:37.000 [2024-11-17 15:04:22.358904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:37.000 [2024-11-17 15:04:22.358909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.383097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.383124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:37.000 [2024-11-17 15:04:22.383132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.134 ms 00:27:37.000 [2024-11-17 15:04:22.383138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.383158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.383164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:37.000 [2024-11-17 15:04:22.383170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:37.000 [2024-11-17 15:04:22.383176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.383248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.383256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
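The layout dump printed above hangs together: the l2p region size follows directly from the reported L2P entry count and address size, presumably rounded up to the region granularity. A quick arithmetic check:
# L2P table size = entries x address size (values taken from the layout dump above)
echo $(( 3774873 * 4 ))                      # 15099492 bytes
echo 'scale=2; 3774873 * 4 / 1048576' | bc   # ~14.4 MiB, matching the 14.50 MiB l2p region after rounding up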
00:27:37.000 [2024-11-17 15:04:22.383263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:37.000 [2024-11-17 15:04:22.383269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.383298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.383304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:37.000 [2024-11-17 15:04:22.383310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:37.000 [2024-11-17 15:04:22.383316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.394879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.394906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:37.000 [2024-11-17 15:04:22.394914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.547 ms 00:27:37.000 [2024-11-17 15:04:22.394933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.395008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.395016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:37.000 [2024-11-17 15:04:22.395023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:37.000 [2024-11-17 15:04:22.395028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.416942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.417108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:37.000 [2024-11-17 15:04:22.417132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.898 ms 00:27:37.000 [2024-11-17 15:04:22.417143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.427175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.427212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:37.000 [2024-11-17 15:04:22.427230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.494 ms 00:27:37.000 [2024-11-17 15:04:22.427240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.471207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.471250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:37.000 [2024-11-17 15:04:22.471265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.904 ms 00:27:37.000 [2024-11-17 15:04:22.471272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.471380] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:37.000 [2024-11-17 15:04:22.471456] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:37.000 [2024-11-17 15:04:22.471530] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:37.000 [2024-11-17 15:04:22.471602] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:37.000 [2024-11-17 15:04:22.471609] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.471615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:37.000 [2024-11-17 15:04:22.471622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:27:37.000 [2024-11-17 15:04:22.471628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.471672] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:37.000 [2024-11-17 15:04:22.471681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.471690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:37.000 [2024-11-17 15:04:22.471697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:37.000 [2024-11-17 15:04:22.471704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.482993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.483096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:37.000 [2024-11-17 15:04:22.483109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.272 ms 00:27:37.000 [2024-11-17 15:04:22.483116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.489695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.489721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:37.000 [2024-11-17 15:04:22.489729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:37.000 [2024-11-17 15:04:22.489735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.000 [2024-11-17 15:04:22.489796] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:37.000 [2024-11-17 15:04:22.489910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.000 [2024-11-17 15:04:22.489937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:37.000 [2024-11-17 15:04:22.489945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.115 ms 00:27:37.000 [2024-11-17 15:04:22.489951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.569 [2024-11-17 15:04:23.028640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.569 [2024-11-17 15:04:23.028710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:37.569 [2024-11-17 15:04:23.028726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 538.012 ms 00:27:37.569 [2024-11-17 15:04:23.028736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.569 [2024-11-17 15:04:23.033256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.569 [2024-11-17 15:04:23.033293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:37.569 [2024-11-17 15:04:23.033303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.441 ms 00:27:37.569 [2024-11-17 15:04:23.033312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.569 [2024-11-17 15:04:23.034252] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:27:37.569 [2024-11-17 15:04:23.034287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.569 [2024-11-17 15:04:23.034296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:37.569 [2024-11-17 15:04:23.034306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.941 ms 00:27:37.569 [2024-11-17 15:04:23.034313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.569 [2024-11-17 15:04:23.034345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.569 [2024-11-17 15:04:23.034354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:37.569 [2024-11-17 15:04:23.034363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:37.569 [2024-11-17 15:04:23.034370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.569 [2024-11-17 15:04:23.034410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 544.607 ms, result 0 00:27:37.569 [2024-11-17 15:04:23.034447] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:37.569 [2024-11-17 15:04:23.034535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.569 [2024-11-17 15:04:23.034546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:37.569 [2024-11-17 15:04:23.034554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.090 ms 00:27:37.569 [2024-11-17 15:04:23.034561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.140 [2024-11-17 15:04:23.668131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.140 [2024-11-17 15:04:23.668444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:38.140 [2024-11-17 15:04:23.668470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 632.555 ms 00:27:38.140 [2024-11-17 15:04:23.668480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.140 [2024-11-17 15:04:23.673384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.140 [2024-11-17 15:04:23.673435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:38.140 [2024-11-17 15:04:23.673447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.614 ms 00:27:38.140 [2024-11-17 15:04:23.673455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.140 [2024-11-17 15:04:23.674439] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:38.140 [2024-11-17 15:04:23.674490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.140 [2024-11-17 15:04:23.674499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:38.140 [2024-11-17 15:04:23.674509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.997 ms 00:27:38.140 [2024-11-17 15:04:23.674518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.140 [2024-11-17 15:04:23.674558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.140 [2024-11-17 15:04:23.674568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:38.140 [2024-11-17 15:04:23.674576] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:38.140 [2024-11-17 15:04:23.674584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.140 [2024-11-17 15:04:23.674625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 640.166 ms, result 0 00:27:38.140 [2024-11-17 15:04:23.674673] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:38.140 [2024-11-17 15:04:23.674686] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:38.140 [2024-11-17 15:04:23.674696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.140 [2024-11-17 15:04:23.674705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:38.140 [2024-11-17 15:04:23.674715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1184.909 ms 00:27:38.140 [2024-11-17 15:04:23.674723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.140 [2024-11-17 15:04:23.674756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.140 [2024-11-17 15:04:23.674766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:38.140 [2024-11-17 15:04:23.674778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:38.140 [2024-11-17 15:04:23.674786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.401 [2024-11-17 15:04:23.687489] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:38.401 [2024-11-17 15:04:23.687784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.401 [2024-11-17 15:04:23.687824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:38.401 [2024-11-17 15:04:23.687942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.981 ms 00:27:38.401 [2024-11-17 15:04:23.687970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.401 [2024-11-17 15:04:23.688728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.401 [2024-11-17 15:04:23.688848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:38.401 [2024-11-17 15:04:23.688943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.652 ms 00:27:38.401 [2024-11-17 15:04:23.688969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.401 [2024-11-17 15:04:23.691214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.401 [2024-11-17 15:04:23.691333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:38.401 [2024-11-17 15:04:23.691393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.204 ms 00:27:38.401 [2024-11-17 15:04:23.691416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.401 [2024-11-17 15:04:23.691487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.401 [2024-11-17 15:04:23.691513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:38.401 [2024-11-17 15:04:23.691534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:38.401 [2024-11-17 15:04:23.691557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.401 [2024-11-17 15:04:23.691685] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.401 [2024-11-17 15:04:23.691877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:38.401 [2024-11-17 15:04:23.691906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:38.401 [2024-11-17 15:04:23.691955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.401 [2024-11-17 15:04:23.691999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.401 [2024-11-17 15:04:23.692021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:38.401 [2024-11-17 15:04:23.692044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:38.401 [2024-11-17 15:04:23.692063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.401 [2024-11-17 15:04:23.692119] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:38.401 [2024-11-17 15:04:23.692148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.401 [2024-11-17 15:04:23.692169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:38.401 [2024-11-17 15:04:23.692188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:38.401 [2024-11-17 15:04:23.692207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.401 [2024-11-17 15:04:23.692299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.401 [2024-11-17 15:04:23.692553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:38.401 [2024-11-17 15:04:23.692579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:38.401 [2024-11-17 15:04:23.692599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.401 [2024-11-17 15:04:23.693842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1377.551 ms, result 0 00:27:38.401 [2024-11-17 15:04:23.708420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.401 [2024-11-17 15:04:23.724414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:38.401 [2024-11-17 15:04:23.733476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:38.401 Validate MD5 checksum, iteration 1 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:38.402 15:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:38.402 [2024-11-17 15:04:23.872195] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:38.402 [2024-11-17 15:04:23.872442] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80636 ] 00:27:38.663 [2024-11-17 15:04:24.028933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.663 [2024-11-17 15:04:24.104681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.047  [2024-11-17T15:04:26.163Z] Copying: 723/1024 [MB] (723 MBps) [2024-11-17T15:04:27.550Z] Copying: 1024/1024 [MB] (average 706 MBps) 00:27:42.007 00:27:42.007 15:04:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:42.007 15:04:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=afb9be6b0f6f151a09f2c1b9bc4d2e86 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ afb9be6b0f6f151a09f2c1b9bc4d2e86 != \a\f\b\9\b\e\6\b\0\f\6\f\1\5\1\a\0\9\f\2\c\1\b\9\b\c\4\d\2\e\8\6 ]] 00:27:43.919 Validate MD5 checksum, iteration 2 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:43.919 15:04:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:43.919 [2024-11-17 15:04:29.326583] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:43.919 [2024-11-17 15:04:29.327243] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80696 ] 00:27:44.180 [2024-11-17 15:04:29.486567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.180 [2024-11-17 15:04:29.591680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.094  [2024-11-17T15:04:31.898Z] Copying: 640/1024 [MB] (640 MBps) [2024-11-17T15:04:32.842Z] Copying: 1024/1024 [MB] (average 608 MBps) 00:27:47.299 00:27:47.299 15:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:47.299 15:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:49.234 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:49.234 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1ba47d5e5ea333501c9e6bc5d6f9e905 00:27:49.234 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1ba47d5e5ea333501c9e6bc5d6f9e905 != \1\b\a\4\7\d\5\e\5\e\a\3\3\3\5\0\1\c\9\e\6\b\c\5\d\6\f\9\e\9\0\5 ]] 00:27:49.234 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:49.234 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:49.234 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:49.234 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:49.234 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:49.234 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80601 ]] 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80601 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80601 ']' 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80601 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80601 00:27:49.495 killing process with pid 80601 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80601' 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80601 00:27:49.495 15:04:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80601 00:27:50.068 [2024-11-17 15:04:35.399086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:50.068 [2024-11-17 15:04:35.409212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.068 [2024-11-17 15:04:35.409245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:50.068 [2024-11-17 15:04:35.409255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:50.068 [2024-11-17 15:04:35.409261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.068 [2024-11-17 15:04:35.409278] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:50.068 [2024-11-17 15:04:35.411337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.068 [2024-11-17 15:04:35.411362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:50.068 [2024-11-17 15:04:35.411370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.048 ms 00:27:50.068 [2024-11-17 15:04:35.411379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.068 [2024-11-17 15:04:35.411555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.068 [2024-11-17 15:04:35.411563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:50.068 [2024-11-17 15:04:35.411570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:27:50.069 [2024-11-17 15:04:35.411575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.412713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.412730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:50.069 [2024-11-17 15:04:35.412737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.126 ms 00:27:50.069 [2024-11-17 15:04:35.412743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.413618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.413636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:50.069 [2024-11-17 15:04:35.413644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:27:50.069 [2024-11-17 15:04:35.413650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.421300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.421326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:50.069 [2024-11-17 15:04:35.421333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.624 ms 00:27:50.069 [2024-11-17 15:04:35.421342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.425391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.425417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:27:50.069 [2024-11-17 15:04:35.425425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.023 ms 00:27:50.069 [2024-11-17 15:04:35.425432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.425489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.425497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:50.069 [2024-11-17 15:04:35.425504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:50.069 [2024-11-17 15:04:35.425510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.432825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.432850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:50.069 [2024-11-17 15:04:35.432857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.300 ms 00:27:50.069 [2024-11-17 15:04:35.432862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.440002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.440026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:50.069 [2024-11-17 15:04:35.440033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.115 ms 00:27:50.069 [2024-11-17 15:04:35.440039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.447048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.447156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:50.069 [2024-11-17 15:04:35.447168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.985 ms 00:27:50.069 [2024-11-17 15:04:35.447174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.454430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.454532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:50.069 [2024-11-17 15:04:35.454543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.214 ms 00:27:50.069 [2024-11-17 15:04:35.454548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.454569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:50.069 [2024-11-17 15:04:35.454580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:50.069 [2024-11-17 15:04:35.454587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:50.069 [2024-11-17 15:04:35.454593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:50.069 [2024-11-17 15:04:35.454599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454616] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:50.069 [2024-11-17 15:04:35.454685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:50.069 [2024-11-17 15:04:35.454690] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 306cf6e8-507b-468a-b3be-5907850a0378 00:27:50.069 [2024-11-17 15:04:35.454696] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:50.069 [2024-11-17 15:04:35.454702] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:50.069 [2024-11-17 15:04:35.454707] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:50.069 [2024-11-17 15:04:35.454712] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:50.069 [2024-11-17 15:04:35.454717] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:50.069 [2024-11-17 15:04:35.454724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:50.069 [2024-11-17 15:04:35.454729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:50.069 [2024-11-17 15:04:35.454734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:50.069 [2024-11-17 15:04:35.454739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:50.069 [2024-11-17 15:04:35.454744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.454756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:50.069 [2024-11-17 15:04:35.454763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.175 ms 00:27:50.069 [2024-11-17 15:04:35.454769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.464590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.464613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:27:50.069 [2024-11-17 15:04:35.464622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.801 ms 00:27:50.069 [2024-11-17 15:04:35.464628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.464897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.069 [2024-11-17 15:04:35.464903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:50.069 [2024-11-17 15:04:35.464910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:27:50.069 [2024-11-17 15:04:35.464915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.498068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.069 [2024-11-17 15:04:35.498160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:50.069 [2024-11-17 15:04:35.498202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.069 [2024-11-17 15:04:35.498220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.498257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.069 [2024-11-17 15:04:35.498273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:50.069 [2024-11-17 15:04:35.498288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.069 [2024-11-17 15:04:35.498302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.069 [2024-11-17 15:04:35.498372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.069 [2024-11-17 15:04:35.498392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:50.069 [2024-11-17 15:04:35.498408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.069 [2024-11-17 15:04:35.498449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.498571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.070 [2024-11-17 15:04:35.498593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:50.070 [2024-11-17 15:04:35.498633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.070 [2024-11-17 15:04:35.498650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.557158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.070 [2024-11-17 15:04:35.557283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:50.070 [2024-11-17 15:04:35.557323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.070 [2024-11-17 15:04:35.557340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.605217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.070 [2024-11-17 15:04:35.605332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:50.070 [2024-11-17 15:04:35.605370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.070 [2024-11-17 15:04:35.605387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.605446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.070 [2024-11-17 15:04:35.605465] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:50.070 [2024-11-17 15:04:35.605480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.070 [2024-11-17 15:04:35.605495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.605545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.070 [2024-11-17 15:04:35.605563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:50.070 [2024-11-17 15:04:35.605583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.070 [2024-11-17 15:04:35.605633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.605717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.070 [2024-11-17 15:04:35.605823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:50.070 [2024-11-17 15:04:35.605866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.070 [2024-11-17 15:04:35.605880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.605914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.070 [2024-11-17 15:04:35.605942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:50.070 [2024-11-17 15:04:35.605957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.070 [2024-11-17 15:04:35.605973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.606007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.070 [2024-11-17 15:04:35.606024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:50.070 [2024-11-17 15:04:35.606079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.070 [2024-11-17 15:04:35.606096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.606139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:50.070 [2024-11-17 15:04:35.606158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:50.070 [2024-11-17 15:04:35.606175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:50.070 [2024-11-17 15:04:35.606189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.070 [2024-11-17 15:04:35.606286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.052 ms, result 0 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:51.013 Remove shared memory files 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80379 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:51.013 ************************************ 00:27:51.013 END TEST ftl_upgrade_shutdown 00:27:51.013 ************************************ 00:27:51.013 00:27:51.013 real 1m21.814s 00:27:51.013 user 1m53.270s 00:27:51.013 sys 0m20.139s 00:27:51.013 15:04:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.014 15:04:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:51.014 15:04:36 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:51.014 15:04:36 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:51.014 15:04:36 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:27:51.014 15:04:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.014 15:04:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:51.014 ************************************ 00:27:51.014 START TEST ftl_restore_fast 00:27:51.014 ************************************ 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:51.014 * Looking for test storage... 00:27:51.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:51.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.014 --rc genhtml_branch_coverage=1 00:27:51.014 --rc genhtml_function_coverage=1 00:27:51.014 --rc genhtml_legend=1 00:27:51.014 --rc geninfo_all_blocks=1 00:27:51.014 --rc geninfo_unexecuted_blocks=1 00:27:51.014 00:27:51.014 ' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:51.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.014 --rc genhtml_branch_coverage=1 00:27:51.014 --rc genhtml_function_coverage=1 00:27:51.014 --rc genhtml_legend=1 00:27:51.014 --rc geninfo_all_blocks=1 00:27:51.014 --rc geninfo_unexecuted_blocks=1 00:27:51.014 00:27:51.014 ' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:51.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.014 --rc genhtml_branch_coverage=1 00:27:51.014 --rc genhtml_function_coverage=1 00:27:51.014 --rc genhtml_legend=1 00:27:51.014 --rc geninfo_all_blocks=1 00:27:51.014 --rc geninfo_unexecuted_blocks=1 00:27:51.014 00:27:51.014 ' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:51.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.014 --rc genhtml_branch_coverage=1 00:27:51.014 --rc genhtml_function_coverage=1 00:27:51.014 --rc genhtml_legend=1 00:27:51.014 --rc geninfo_all_blocks=1 00:27:51.014 --rc geninfo_unexecuted_blocks=1 00:27:51.014 00:27:51.014 ' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
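The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x ("lt 1.15 2" returning 0) before it settles on the coverage flags exported just after. A rough sketch of that kind of component-wise compare, reconstructed from the xtrace rather than copied from the script (the helper name is hypothetical), could look like:

    # Hypothetical version_lt, modeled on the cmp_versions xtrace above:
    # returns 0 (true) when $1 is strictly older than $2, comparing the
    # dot/dash/colon-separated numeric fields left to right.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo older    # prints "older", matching the trace's return 0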
00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.IBYcVXkFzS 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:51.014 15:04:36 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=80848 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 80848 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 80848 ']' 00:27:51.014 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.015 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.015 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.015 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.015 15:04:36 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:51.276 [2024-11-17 15:04:36.599281] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:51.276 [2024-11-17 15:04:36.599481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80848 ] 00:27:51.276 [2024-11-17 15:04:36.748469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.537 [2024-11-17 15:04:36.824888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:27:52.109 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:52.371 { 00:27:52.371 "name": "nvme0n1", 00:27:52.371 "aliases": [ 00:27:52.371 "0c926539-6ef6-4213-b8ef-3fba7aee12a7" 00:27:52.371 ], 00:27:52.371 "product_name": "NVMe disk", 00:27:52.371 "block_size": 4096, 00:27:52.371 "num_blocks": 1310720, 00:27:52.371 "uuid": "0c926539-6ef6-4213-b8ef-3fba7aee12a7", 00:27:52.371 "numa_id": -1, 00:27:52.371 "assigned_rate_limits": { 00:27:52.371 "rw_ios_per_sec": 0, 00:27:52.371 "rw_mbytes_per_sec": 0, 00:27:52.371 "r_mbytes_per_sec": 0, 00:27:52.371 "w_mbytes_per_sec": 0 00:27:52.371 }, 00:27:52.371 "claimed": true, 00:27:52.371 "claim_type": "read_many_write_one", 00:27:52.371 "zoned": false, 00:27:52.371 "supported_io_types": { 00:27:52.371 "read": true, 00:27:52.371 "write": true, 00:27:52.371 "unmap": true, 00:27:52.371 "flush": true, 00:27:52.371 "reset": true, 00:27:52.371 "nvme_admin": true, 00:27:52.371 "nvme_io": true, 00:27:52.371 "nvme_io_md": false, 00:27:52.371 "write_zeroes": true, 00:27:52.371 "zcopy": false, 00:27:52.371 "get_zone_info": false, 00:27:52.371 "zone_management": false, 00:27:52.371 "zone_append": false, 00:27:52.371 "compare": true, 00:27:52.371 "compare_and_write": false, 00:27:52.371 "abort": true, 00:27:52.371 "seek_hole": false, 00:27:52.371 "seek_data": false, 00:27:52.371 "copy": true, 00:27:52.371 "nvme_iov_md": false 00:27:52.371 }, 00:27:52.371 "driver_specific": { 00:27:52.371 "nvme": [ 00:27:52.371 { 00:27:52.371 "pci_address": "0000:00:11.0", 00:27:52.371 "trid": { 00:27:52.371 "trtype": "PCIe", 00:27:52.371 "traddr": "0000:00:11.0" 00:27:52.371 }, 00:27:52.371 "ctrlr_data": { 00:27:52.371 "cntlid": 0, 00:27:52.371 "vendor_id": "0x1b36", 00:27:52.371 "model_number": "QEMU NVMe Ctrl", 00:27:52.371 "serial_number": "12341", 00:27:52.371 "firmware_revision": "8.0.0", 00:27:52.371 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:52.371 "oacs": { 00:27:52.371 "security": 0, 00:27:52.371 "format": 1, 00:27:52.371 "firmware": 0, 00:27:52.371 "ns_manage": 1 00:27:52.371 }, 00:27:52.371 "multi_ctrlr": false, 00:27:52.371 "ana_reporting": false 00:27:52.371 }, 00:27:52.371 "vs": { 00:27:52.371 "nvme_version": "1.4" 00:27:52.371 }, 00:27:52.371 "ns_data": { 00:27:52.371 "id": 1, 00:27:52.371 "can_share": false 00:27:52.371 } 00:27:52.371 } 00:27:52.371 ], 00:27:52.371 "mp_policy": "active_passive" 00:27:52.371 } 00:27:52.371 } 00:27:52.371 ]' 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:52.371 15:04:37 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:52.632 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=dd8f1b41-b803-4a43-97c5-5d08df6642e6 00:27:52.632 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:27:52.632 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd8f1b41-b803-4a43-97c5-5d08df6642e6 00:27:52.893 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=12aa37be-7ca4-473a-87df-6586a041f523 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 12aa37be-7ca4-473a-87df-6586a041f523 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=e1b50792-780a-473c-97d1-d17fd882aba2 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e1b50792-780a-473c-97d1-d17fd882aba2 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=e1b50792-780a-473c-97d1-d17fd882aba2 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size e1b50792-780a-473c-97d1-d17fd882aba2 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e1b50792-780a-473c-97d1-d17fd882aba2 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:53.154 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e1b50792-780a-473c-97d1-d17fd882aba2 00:27:53.415 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:53.415 { 00:27:53.415 "name": "e1b50792-780a-473c-97d1-d17fd882aba2", 00:27:53.415 "aliases": [ 00:27:53.415 "lvs/nvme0n1p0" 00:27:53.415 ], 00:27:53.415 "product_name": "Logical Volume", 00:27:53.415 "block_size": 4096, 00:27:53.415 "num_blocks": 26476544, 00:27:53.415 "uuid": "e1b50792-780a-473c-97d1-d17fd882aba2", 00:27:53.416 "assigned_rate_limits": { 00:27:53.416 "rw_ios_per_sec": 0, 00:27:53.416 "rw_mbytes_per_sec": 0, 00:27:53.416 "r_mbytes_per_sec": 0, 00:27:53.416 "w_mbytes_per_sec": 0 00:27:53.416 }, 00:27:53.416 "claimed": false, 00:27:53.416 "zoned": false, 00:27:53.416 "supported_io_types": { 00:27:53.416 "read": true, 00:27:53.416 "write": true, 00:27:53.416 "unmap": true, 00:27:53.416 "flush": false, 00:27:53.416 "reset": true, 00:27:53.416 "nvme_admin": false, 00:27:53.416 "nvme_io": false, 00:27:53.416 "nvme_io_md": false, 00:27:53.416 "write_zeroes": true, 00:27:53.416 "zcopy": false, 00:27:53.416 "get_zone_info": false, 00:27:53.416 "zone_management": false, 00:27:53.416 
"zone_append": false, 00:27:53.416 "compare": false, 00:27:53.416 "compare_and_write": false, 00:27:53.416 "abort": false, 00:27:53.416 "seek_hole": true, 00:27:53.416 "seek_data": true, 00:27:53.416 "copy": false, 00:27:53.416 "nvme_iov_md": false 00:27:53.416 }, 00:27:53.416 "driver_specific": { 00:27:53.416 "lvol": { 00:27:53.416 "lvol_store_uuid": "12aa37be-7ca4-473a-87df-6586a041f523", 00:27:53.416 "base_bdev": "nvme0n1", 00:27:53.416 "thin_provision": true, 00:27:53.416 "num_allocated_clusters": 0, 00:27:53.416 "snapshot": false, 00:27:53.416 "clone": false, 00:27:53.416 "esnap_clone": false 00:27:53.416 } 00:27:53.416 } 00:27:53.416 } 00:27:53.416 ]' 00:27:53.416 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:53.416 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:53.416 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:53.416 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:53.416 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:53.416 15:04:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:53.416 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:27:53.416 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:27:53.416 15:04:38 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:53.677 15:04:39 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:53.677 15:04:39 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:53.677 15:04:39 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size e1b50792-780a-473c-97d1-d17fd882aba2 00:27:53.677 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e1b50792-780a-473c-97d1-d17fd882aba2 00:27:53.677 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:53.677 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:53.677 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:53.677 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e1b50792-780a-473c-97d1-d17fd882aba2 00:27:53.938 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:53.938 { 00:27:53.938 "name": "e1b50792-780a-473c-97d1-d17fd882aba2", 00:27:53.938 "aliases": [ 00:27:53.938 "lvs/nvme0n1p0" 00:27:53.938 ], 00:27:53.938 "product_name": "Logical Volume", 00:27:53.938 "block_size": 4096, 00:27:53.938 "num_blocks": 26476544, 00:27:53.938 "uuid": "e1b50792-780a-473c-97d1-d17fd882aba2", 00:27:53.938 "assigned_rate_limits": { 00:27:53.938 "rw_ios_per_sec": 0, 00:27:53.938 "rw_mbytes_per_sec": 0, 00:27:53.938 "r_mbytes_per_sec": 0, 00:27:53.938 "w_mbytes_per_sec": 0 00:27:53.938 }, 00:27:53.938 "claimed": false, 00:27:53.938 "zoned": false, 00:27:53.938 "supported_io_types": { 00:27:53.938 "read": true, 00:27:53.938 "write": true, 00:27:53.938 "unmap": true, 00:27:53.938 "flush": false, 00:27:53.938 "reset": true, 00:27:53.938 "nvme_admin": false, 00:27:53.938 "nvme_io": false, 00:27:53.938 "nvme_io_md": false, 00:27:53.938 "write_zeroes": true, 00:27:53.938 "zcopy": false, 00:27:53.938 "get_zone_info": false, 00:27:53.938 
"zone_management": false, 00:27:53.938 "zone_append": false, 00:27:53.938 "compare": false, 00:27:53.938 "compare_and_write": false, 00:27:53.938 "abort": false, 00:27:53.938 "seek_hole": true, 00:27:53.938 "seek_data": true, 00:27:53.938 "copy": false, 00:27:53.938 "nvme_iov_md": false 00:27:53.938 }, 00:27:53.938 "driver_specific": { 00:27:53.938 "lvol": { 00:27:53.938 "lvol_store_uuid": "12aa37be-7ca4-473a-87df-6586a041f523", 00:27:53.938 "base_bdev": "nvme0n1", 00:27:53.938 "thin_provision": true, 00:27:53.938 "num_allocated_clusters": 0, 00:27:53.938 "snapshot": false, 00:27:53.938 "clone": false, 00:27:53.938 "esnap_clone": false 00:27:53.938 } 00:27:53.938 } 00:27:53.938 } 00:27:53.938 ]' 00:27:53.938 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:53.938 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:53.938 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:53.938 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:53.938 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:53.938 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:53.938 15:04:39 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:27:53.938 15:04:39 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:54.199 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:54.199 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size e1b50792-780a-473c-97d1-d17fd882aba2 00:27:54.199 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e1b50792-780a-473c-97d1-d17fd882aba2 00:27:54.199 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:54.199 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:54.199 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:54.199 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e1b50792-780a-473c-97d1-d17fd882aba2 00:27:54.461 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:54.461 { 00:27:54.461 "name": "e1b50792-780a-473c-97d1-d17fd882aba2", 00:27:54.461 "aliases": [ 00:27:54.461 "lvs/nvme0n1p0" 00:27:54.461 ], 00:27:54.461 "product_name": "Logical Volume", 00:27:54.461 "block_size": 4096, 00:27:54.461 "num_blocks": 26476544, 00:27:54.461 "uuid": "e1b50792-780a-473c-97d1-d17fd882aba2", 00:27:54.461 "assigned_rate_limits": { 00:27:54.461 "rw_ios_per_sec": 0, 00:27:54.461 "rw_mbytes_per_sec": 0, 00:27:54.461 "r_mbytes_per_sec": 0, 00:27:54.461 "w_mbytes_per_sec": 0 00:27:54.461 }, 00:27:54.461 "claimed": false, 00:27:54.461 "zoned": false, 00:27:54.461 "supported_io_types": { 00:27:54.461 "read": true, 00:27:54.461 "write": true, 00:27:54.461 "unmap": true, 00:27:54.461 "flush": false, 00:27:54.461 "reset": true, 00:27:54.461 "nvme_admin": false, 00:27:54.461 "nvme_io": false, 00:27:54.461 "nvme_io_md": false, 00:27:54.461 "write_zeroes": true, 00:27:54.461 "zcopy": false, 00:27:54.461 "get_zone_info": false, 00:27:54.461 "zone_management": false, 00:27:54.461 "zone_append": false, 00:27:54.461 "compare": false, 00:27:54.461 "compare_and_write": false, 00:27:54.461 "abort": false, 
00:27:54.461 "seek_hole": true, 00:27:54.461 "seek_data": true, 00:27:54.461 "copy": false, 00:27:54.461 "nvme_iov_md": false 00:27:54.461 }, 00:27:54.461 "driver_specific": { 00:27:54.461 "lvol": { 00:27:54.461 "lvol_store_uuid": "12aa37be-7ca4-473a-87df-6586a041f523", 00:27:54.461 "base_bdev": "nvme0n1", 00:27:54.461 "thin_provision": true, 00:27:54.461 "num_allocated_clusters": 0, 00:27:54.461 "snapshot": false, 00:27:54.461 "clone": false, 00:27:54.461 "esnap_clone": false 00:27:54.461 } 00:27:54.461 } 00:27:54.461 } 00:27:54.461 ]' 00:27:54.461 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:54.461 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:54.461 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:54.461 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e1b50792-780a-473c-97d1-d17fd882aba2 --l2p_dram_limit 10' 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:27:54.462 15:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e1b50792-780a-473c-97d1-d17fd882aba2 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:27:54.723 [2024-11-17 15:04:40.167850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.723 [2024-11-17 15:04:40.168081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:54.723 [2024-11-17 15:04:40.168112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:54.723 [2024-11-17 15:04:40.168122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.723 [2024-11-17 15:04:40.168212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.723 [2024-11-17 15:04:40.168223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:54.723 [2024-11-17 15:04:40.168235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:54.723 [2024-11-17 15:04:40.168244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.723 [2024-11-17 15:04:40.168274] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:54.723 [2024-11-17 15:04:40.169099] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:54.723 [2024-11-17 15:04:40.169131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.723 [2024-11-17 15:04:40.169140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:54.723 [2024-11-17 15:04:40.169152] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:27:54.723 [2024-11-17 15:04:40.169160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.723 [2024-11-17 15:04:40.169203] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f8ab1106-8587-4bca-959a-3c4b3782d3c4 00:27:54.723 [2024-11-17 15:04:40.170890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.723 [2024-11-17 15:04:40.170955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:54.723 [2024-11-17 15:04:40.170968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:54.723 [2024-11-17 15:04:40.170984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.723 [2024-11-17 15:04:40.179550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.723 [2024-11-17 15:04:40.179720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:54.724 [2024-11-17 15:04:40.179738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.481 ms 00:27:54.724 [2024-11-17 15:04:40.179750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.724 [2024-11-17 15:04:40.179867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.724 [2024-11-17 15:04:40.179881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:54.724 [2024-11-17 15:04:40.179890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:54.724 [2024-11-17 15:04:40.179904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.724 [2024-11-17 15:04:40.179993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.724 [2024-11-17 15:04:40.180007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:54.724 [2024-11-17 15:04:40.180019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:54.724 [2024-11-17 15:04:40.180029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.724 [2024-11-17 15:04:40.180054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:54.724 [2024-11-17 15:04:40.184401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.724 [2024-11-17 15:04:40.184441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:54.724 [2024-11-17 15:04:40.184454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.350 ms 00:27:54.724 [2024-11-17 15:04:40.184462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.724 [2024-11-17 15:04:40.184503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.724 [2024-11-17 15:04:40.184512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:54.724 [2024-11-17 15:04:40.184522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:54.724 [2024-11-17 15:04:40.184530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.724 [2024-11-17 15:04:40.184567] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:54.724 [2024-11-17 15:04:40.184714] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:54.724 [2024-11-17 15:04:40.184731] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:54.724 [2024-11-17 15:04:40.184743] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:54.724 [2024-11-17 15:04:40.184756] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:54.724 [2024-11-17 15:04:40.184765] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:54.724 [2024-11-17 15:04:40.184775] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:54.724 [2024-11-17 15:04:40.184786] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:54.724 [2024-11-17 15:04:40.184796] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:54.724 [2024-11-17 15:04:40.184803] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:54.724 [2024-11-17 15:04:40.184814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.724 [2024-11-17 15:04:40.184822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:54.724 [2024-11-17 15:04:40.184833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:27:54.724 [2024-11-17 15:04:40.184851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.724 [2024-11-17 15:04:40.184961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.724 [2024-11-17 15:04:40.184971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:54.724 [2024-11-17 15:04:40.184981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:54.724 [2024-11-17 15:04:40.184988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.724 [2024-11-17 15:04:40.185093] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:54.724 [2024-11-17 15:04:40.185103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:54.724 [2024-11-17 15:04:40.185113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:54.724 [2024-11-17 15:04:40.185121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:54.724 [2024-11-17 15:04:40.185138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:54.724 [2024-11-17 15:04:40.185155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:54.724 [2024-11-17 15:04:40.185164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:54.724 [2024-11-17 15:04:40.185180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:54.724 [2024-11-17 15:04:40.185188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:54.724 [2024-11-17 15:04:40.185197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:54.724 [2024-11-17 15:04:40.185205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:54.724 [2024-11-17 15:04:40.185214] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:54.724 [2024-11-17 15:04:40.185221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:54.724 [2024-11-17 15:04:40.185245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:54.724 [2024-11-17 15:04:40.185254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:54.724 [2024-11-17 15:04:40.185270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.724 [2024-11-17 15:04:40.185285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:54.724 [2024-11-17 15:04:40.185292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.724 [2024-11-17 15:04:40.185307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:54.724 [2024-11-17 15:04:40.185316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.724 [2024-11-17 15:04:40.185332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:54.724 [2024-11-17 15:04:40.185338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.724 [2024-11-17 15:04:40.185353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:54.724 [2024-11-17 15:04:40.185365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:54.724 [2024-11-17 15:04:40.185380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:54.724 [2024-11-17 15:04:40.185387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:54.724 [2024-11-17 15:04:40.185395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:54.724 [2024-11-17 15:04:40.185401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:54.724 [2024-11-17 15:04:40.185410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:54.724 [2024-11-17 15:04:40.185417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:54.724 [2024-11-17 15:04:40.185432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:54.724 [2024-11-17 15:04:40.185441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.724 [2024-11-17 15:04:40.185447] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:54.725 [2024-11-17 15:04:40.185458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:54.725 [2024-11-17 15:04:40.185465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:27:54.725 [2024-11-17 15:04:40.185475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.725 [2024-11-17 15:04:40.185484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:54.725 [2024-11-17 15:04:40.185495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:54.725 [2024-11-17 15:04:40.185505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:54.725 [2024-11-17 15:04:40.185514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:54.725 [2024-11-17 15:04:40.185521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:54.725 [2024-11-17 15:04:40.185530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:54.725 [2024-11-17 15:04:40.185541] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:54.725 [2024-11-17 15:04:40.185556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:54.725 [2024-11-17 15:04:40.185565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:54.725 [2024-11-17 15:04:40.185575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:54.725 [2024-11-17 15:04:40.185581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:54.725 [2024-11-17 15:04:40.185591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:54.725 [2024-11-17 15:04:40.185599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:54.725 [2024-11-17 15:04:40.185608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:54.725 [2024-11-17 15:04:40.185615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:54.725 [2024-11-17 15:04:40.185624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:54.725 [2024-11-17 15:04:40.185631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:54.725 [2024-11-17 15:04:40.185642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:54.725 [2024-11-17 15:04:40.185649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:54.725 [2024-11-17 15:04:40.185660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:54.725 [2024-11-17 15:04:40.185667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:54.725 [2024-11-17 15:04:40.185677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
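For scale, the sizes in the layout dump above are self-consistent; the l2p region and the base device capacity follow from the entry count and block size reported a few lines earlier:

    20971520 L2P entries x 4 bytes/entry  = 83886080 bytes = 80.00 MiB   (the l2p region in the NV cache layout)
    26476544 blocks      x 4096 bytes/blk = 103424.00 MiB                (the base device capacity, i.e. the lvol created above)

Since bdev_ftl_create was invoked with --l2p_dram_limit 10, only about 10 MiB of that 80 MiB mapping table may stay resident in DRAM (the l2p cache later logs a 9-of-10 MiB maximum), with the remainder demand-loaded from the l2p region on the NV cache.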
00:27:54.725 [2024-11-17 15:04:40.185684] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:54.725 [2024-11-17 15:04:40.185695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:54.725 [2024-11-17 15:04:40.185703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:54.725 [2024-11-17 15:04:40.185713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:54.725 [2024-11-17 15:04:40.185720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:54.725 [2024-11-17 15:04:40.185730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:54.725 [2024-11-17 15:04:40.185738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.725 [2024-11-17 15:04:40.185748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:54.725 [2024-11-17 15:04:40.185755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:27:54.725 [2024-11-17 15:04:40.185764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.725 [2024-11-17 15:04:40.185803] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:54.725 [2024-11-17 15:04:40.185817] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:58.938 [2024-11-17 15:04:44.406184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.938 [2024-11-17 15:04:44.406547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:58.938 [2024-11-17 15:04:44.406672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4220.360 ms 00:27:58.938 [2024-11-17 15:04:44.406705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.938 [2024-11-17 15:04:44.438864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.938 [2024-11-17 15:04:44.439130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.938 [2024-11-17 15:04:44.439272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.852 ms 00:27:58.938 [2024-11-17 15:04:44.439304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.938 [2024-11-17 15:04:44.439464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.938 [2024-11-17 15:04:44.439497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:58.938 [2024-11-17 15:04:44.439520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:58.938 [2024-11-17 15:04:44.439548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.938 [2024-11-17 15:04:44.475179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.938 [2024-11-17 15:04:44.475391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.938 [2024-11-17 15:04:44.475469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.576 ms 00:27:58.938 [2024-11-17 15:04:44.475497] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.938 [2024-11-17 15:04:44.475555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.938 [2024-11-17 15:04:44.475580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.938 [2024-11-17 15:04:44.475601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:58.938 [2024-11-17 15:04:44.475622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.938 [2024-11-17 15:04:44.476286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.938 [2024-11-17 15:04:44.476451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.938 [2024-11-17 15:04:44.476521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:27:58.938 [2024-11-17 15:04:44.476548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.938 [2024-11-17 15:04:44.476682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.938 [2024-11-17 15:04:44.476710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.938 [2024-11-17 15:04:44.476731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:27:58.938 [2024-11-17 15:04:44.476754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.200 [2024-11-17 15:04:44.494280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.200 [2024-11-17 15:04:44.494461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:59.200 [2024-11-17 15:04:44.494532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.495 ms 00:27:59.200 [2024-11-17 15:04:44.494560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.200 [2024-11-17 15:04:44.507949] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:59.200 [2024-11-17 15:04:44.511975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.200 [2024-11-17 15:04:44.512119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:59.200 [2024-11-17 15:04:44.512178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.286 ms 00:27:59.200 [2024-11-17 15:04:44.512201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.200 [2024-11-17 15:04:44.632933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.200 [2024-11-17 15:04:44.633166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:59.201 [2024-11-17 15:04:44.633244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.675 ms 00:27:59.201 [2024-11-17 15:04:44.633270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.201 [2024-11-17 15:04:44.633610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.201 [2024-11-17 15:04:44.633739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:59.201 [2024-11-17 15:04:44.633807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:27:59.201 [2024-11-17 15:04:44.633831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.201 [2024-11-17 15:04:44.660207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.201 [2024-11-17 15:04:44.660381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:27:59.201 [2024-11-17 15:04:44.660448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.299 ms 00:27:59.201 [2024-11-17 15:04:44.660460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.201 [2024-11-17 15:04:44.685678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.201 [2024-11-17 15:04:44.685728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:59.201 [2024-11-17 15:04:44.685745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.092 ms 00:27:59.201 [2024-11-17 15:04:44.685753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.201 [2024-11-17 15:04:44.686397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.201 [2024-11-17 15:04:44.686417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:59.201 [2024-11-17 15:04:44.686430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:27:59.201 [2024-11-17 15:04:44.686440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.462 [2024-11-17 15:04:44.776866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.462 [2024-11-17 15:04:44.776916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:59.462 [2024-11-17 15:04:44.776956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.359 ms 00:27:59.463 [2024-11-17 15:04:44.776966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.463 [2024-11-17 15:04:44.805019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.463 [2024-11-17 15:04:44.805066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:59.463 [2024-11-17 15:04:44.805083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.960 ms 00:27:59.463 [2024-11-17 15:04:44.805092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.463 [2024-11-17 15:04:44.831738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.463 [2024-11-17 15:04:44.831977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:59.463 [2024-11-17 15:04:44.832005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.586 ms 00:27:59.463 [2024-11-17 15:04:44.832014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.463 [2024-11-17 15:04:44.858637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.463 [2024-11-17 15:04:44.858680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:59.463 [2024-11-17 15:04:44.858696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.535 ms 00:27:59.463 [2024-11-17 15:04:44.858704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.463 [2024-11-17 15:04:44.858762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.463 [2024-11-17 15:04:44.858772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:59.463 [2024-11-17 15:04:44.858787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:59.463 [2024-11-17 15:04:44.858796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.463 [2024-11-17 15:04:44.858895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.463 [2024-11-17 
15:04:44.858908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:59.463 [2024-11-17 15:04:44.858945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:59.463 [2024-11-17 15:04:44.858954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.463 [2024-11-17 15:04:44.860184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4691.796 ms, result 0 00:27:59.463 { 00:27:59.463 "name": "ftl0", 00:27:59.463 "uuid": "f8ab1106-8587-4bca-959a-3c4b3782d3c4" 00:27:59.463 } 00:27:59.463 15:04:44 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:59.463 15:04:44 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:59.723 15:04:45 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:59.723 15:04:45 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:59.986 [2024-11-17 15:04:45.291519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.291589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:59.986 [2024-11-17 15:04:45.291606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:59.986 [2024-11-17 15:04:45.291623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.291649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:59.986 [2024-11-17 15:04:45.294794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.294998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:59.986 [2024-11-17 15:04:45.295028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.121 ms 00:27:59.986 [2024-11-17 15:04:45.295037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.295356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.295371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:59.986 [2024-11-17 15:04:45.295383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:27:59.986 [2024-11-17 15:04:45.295391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.298645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.298773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:59.986 [2024-11-17 15:04:45.298794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:27:59.986 [2024-11-17 15:04:45.298804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.305064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.305217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:59.986 [2024-11-17 15:04:45.305248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.228 ms 00:27:59.986 [2024-11-17 15:04:45.305257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.332547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.332734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:59.986 [2024-11-17 15:04:45.332760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.205 ms 00:27:59.986 [2024-11-17 15:04:45.332768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.351323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.351497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:59.986 [2024-11-17 15:04:45.351525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.499 ms 00:27:59.986 [2024-11-17 15:04:45.351534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.351703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.351715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:59.986 [2024-11-17 15:04:45.351728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:27:59.986 [2024-11-17 15:04:45.351736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.377799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.377846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:59.986 [2024-11-17 15:04:45.377861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.035 ms 00:27:59.986 [2024-11-17 15:04:45.377869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.403418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.403465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:59.986 [2024-11-17 15:04:45.403480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.494 ms 00:27:59.986 [2024-11-17 15:04:45.403487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.428595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.428642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:59.986 [2024-11-17 15:04:45.428656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.051 ms 00:27:59.986 [2024-11-17 15:04:45.428663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.453938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.986 [2024-11-17 15:04:45.453986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:59.986 [2024-11-17 15:04:45.454000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.162 ms 00:27:59.986 [2024-11-17 15:04:45.454008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.986 [2024-11-17 15:04:45.454059] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:59.986 [2024-11-17 15:04:45.454074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454098] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454326] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 
[2024-11-17 15:04:45.454547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:27:59.987 [2024-11-17 15:04:45.454780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:59.987 [2024-11-17 15:04:45.454886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.454894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.454903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.454910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.454945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.454954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.454966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.454974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.454984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.454992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.455001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:59.988 [2024-11-17 15:04:45.455018] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:59.988 [2024-11-17 15:04:45.455029] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8ab1106-8587-4bca-959a-3c4b3782d3c4 
00:27:59.988 [2024-11-17 15:04:45.455038] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:59.988 [2024-11-17 15:04:45.455050] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:59.988 [2024-11-17 15:04:45.455074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:59.988 [2024-11-17 15:04:45.455085] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:59.988 [2024-11-17 15:04:45.455092] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:59.988 [2024-11-17 15:04:45.455103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:59.988 [2024-11-17 15:04:45.455111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:59.988 [2024-11-17 15:04:45.455120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:59.988 [2024-11-17 15:04:45.455126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:59.988 [2024-11-17 15:04:45.455137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.988 [2024-11-17 15:04:45.455145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:59.988 [2024-11-17 15:04:45.455156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:27:59.988 [2024-11-17 15:04:45.455166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.988 [2024-11-17 15:04:45.469190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.988 [2024-11-17 15:04:45.469233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:59.988 [2024-11-17 15:04:45.469247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.978 ms 00:27:59.988 [2024-11-17 15:04:45.469256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.988 [2024-11-17 15:04:45.469661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.988 [2024-11-17 15:04:45.469673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:59.988 [2024-11-17 15:04:45.469687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:27:59.988 [2024-11-17 15:04:45.469696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.988 [2024-11-17 15:04:45.516383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.988 [2024-11-17 15:04:45.516573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:59.988 [2024-11-17 15:04:45.516600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.988 [2024-11-17 15:04:45.516609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.988 [2024-11-17 15:04:45.516690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.988 [2024-11-17 15:04:45.516699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:59.988 [2024-11-17 15:04:45.516713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.988 [2024-11-17 15:04:45.516720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.988 [2024-11-17 15:04:45.516823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.988 [2024-11-17 15:04:45.516834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:59.988 [2024-11-17 15:04:45.516844] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.988 [2024-11-17 15:04:45.516852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.988 [2024-11-17 15:04:45.516874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.988 [2024-11-17 15:04:45.516884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:59.988 [2024-11-17 15:04:45.516894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.988 [2024-11-17 15:04:45.516905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.249 [2024-11-17 15:04:45.602870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.249 [2024-11-17 15:04:45.602940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:00.249 [2024-11-17 15:04:45.602957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.249 [2024-11-17 15:04:45.602965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.249 [2024-11-17 15:04:45.672798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.249 [2024-11-17 15:04:45.672857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:00.249 [2024-11-17 15:04:45.672872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.249 [2024-11-17 15:04:45.672884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.249 [2024-11-17 15:04:45.672998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.249 [2024-11-17 15:04:45.673010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:00.249 [2024-11-17 15:04:45.673021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.249 [2024-11-17 15:04:45.673029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.249 [2024-11-17 15:04:45.673107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.249 [2024-11-17 15:04:45.673117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:00.249 [2024-11-17 15:04:45.673128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.249 [2024-11-17 15:04:45.673136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.249 [2024-11-17 15:04:45.673255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.249 [2024-11-17 15:04:45.673268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:00.249 [2024-11-17 15:04:45.673280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.249 [2024-11-17 15:04:45.673288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.249 [2024-11-17 15:04:45.673329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.249 [2024-11-17 15:04:45.673339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:00.249 [2024-11-17 15:04:45.673350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.249 [2024-11-17 15:04:45.673358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.249 [2024-11-17 15:04:45.673406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.249 [2024-11-17 15:04:45.673415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:28:00.249 [2024-11-17 15:04:45.673426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.249 [2024-11-17 15:04:45.673434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.249 [2024-11-17 15:04:45.673486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.249 [2024-11-17 15:04:45.673497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:00.249 [2024-11-17 15:04:45.673508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.249 [2024-11-17 15:04:45.673516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.249 [2024-11-17 15:04:45.673667] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.111 ms, result 0 00:28:00.249 true 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 80848 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 80848 ']' 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 80848 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80848 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80848' 00:28:00.249 killing process with pid 80848 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 80848 00:28:00.249 15:04:45 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 80848 00:28:06.891 15:04:51 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:10.195 262144+0 records in 00:28:10.195 262144+0 records out 00:28:10.195 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.90697 s, 275 MB/s 00:28:10.195 15:04:55 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:12.109 15:04:57 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:12.370 [2024-11-17 15:04:57.704999] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
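For reference, the 275 MB/s figure in the dd summary above follows directly from the count, block size, and elapsed time that dd reports: 262144 blocks of 4 KiB is 1073741824 bytes, and dd divides by the wall-clock time using decimal megabytes. The short shell sketch below is illustrative only (it is not part of restore.sh) and simply redoes that arithmetic:

  # Reproduce dd's throughput math from the values logged above.
  blocks=262144        # count=256K
  block_size=4096      # bs=4K
  elapsed=3.90697      # seconds, from the dd summary line
  bytes=$((blocks * block_size))   # 1073741824 bytes = 1.0 GiB
  awk -v b="$bytes" -v t="$elapsed" \
      'BEGIN { printf "%d bytes in %s s = %.0f MB/s\n", b, t, b / t / 1e6 }'
  # -> 1073741824 bytes in 3.90697 s = 275 MB/s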
00:28:12.370 [2024-11-17 15:04:57.705145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81081 ] 00:28:12.370 [2024-11-17 15:04:57.864554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.632 [2024-11-17 15:04:57.990585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.893 [2024-11-17 15:04:58.278584] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:12.893 [2024-11-17 15:04:58.278945] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:13.157 [2024-11-17 15:04:58.440186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.157 [2024-11-17 15:04:58.440246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:13.157 [2024-11-17 15:04:58.440267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:13.157 [2024-11-17 15:04:58.440276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.157 [2024-11-17 15:04:58.440331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.157 [2024-11-17 15:04:58.440343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:13.157 [2024-11-17 15:04:58.440355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:13.157 [2024-11-17 15:04:58.440362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.157 [2024-11-17 15:04:58.440383] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:13.157 [2024-11-17 15:04:58.441157] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:13.157 [2024-11-17 15:04:58.441178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.157 [2024-11-17 15:04:58.441187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:13.157 [2024-11-17 15:04:58.441196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:28:13.157 [2024-11-17 15:04:58.441204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.442962] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:13.158 [2024-11-17 15:04:58.457480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.158 [2024-11-17 15:04:58.457531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:13.158 [2024-11-17 15:04:58.457545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.520 ms 00:28:13.158 [2024-11-17 15:04:58.457554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.457638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.158 [2024-11-17 15:04:58.457649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:13.158 [2024-11-17 15:04:58.457658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:13.158 [2024-11-17 15:04:58.457665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.465969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:13.158 [2024-11-17 15:04:58.466010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:13.158 [2024-11-17 15:04:58.466021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.225 ms 00:28:13.158 [2024-11-17 15:04:58.466029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.466116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.158 [2024-11-17 15:04:58.466125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:13.158 [2024-11-17 15:04:58.466134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:13.158 [2024-11-17 15:04:58.466141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.466185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.158 [2024-11-17 15:04:58.466196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:13.158 [2024-11-17 15:04:58.466205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:13.158 [2024-11-17 15:04:58.466213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.466236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:13.158 [2024-11-17 15:04:58.470214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.158 [2024-11-17 15:04:58.470256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:13.158 [2024-11-17 15:04:58.470267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.983 ms 00:28:13.158 [2024-11-17 15:04:58.470278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.470316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.158 [2024-11-17 15:04:58.470324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:13.158 [2024-11-17 15:04:58.470334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:13.158 [2024-11-17 15:04:58.470341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.470394] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:13.158 [2024-11-17 15:04:58.470416] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:13.158 [2024-11-17 15:04:58.470454] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:13.158 [2024-11-17 15:04:58.470473] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:13.158 [2024-11-17 15:04:58.470578] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:13.158 [2024-11-17 15:04:58.470590] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:13.158 [2024-11-17 15:04:58.470600] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:13.158 [2024-11-17 15:04:58.470611] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:13.158 [2024-11-17 15:04:58.470620] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:13.158 [2024-11-17 15:04:58.470629] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:13.158 [2024-11-17 15:04:58.470636] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:13.158 [2024-11-17 15:04:58.470644] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:13.158 [2024-11-17 15:04:58.470651] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:13.158 [2024-11-17 15:04:58.470662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.158 [2024-11-17 15:04:58.470670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:13.158 [2024-11-17 15:04:58.470678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:28:13.158 [2024-11-17 15:04:58.470686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.470769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.158 [2024-11-17 15:04:58.470777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:13.158 [2024-11-17 15:04:58.470785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:13.158 [2024-11-17 15:04:58.470793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.158 [2024-11-17 15:04:58.470897] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:13.158 [2024-11-17 15:04:58.470910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:13.158 [2024-11-17 15:04:58.470943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:13.158 [2024-11-17 15:04:58.470952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.158 [2024-11-17 15:04:58.470961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:13.158 [2024-11-17 15:04:58.470968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:13.158 [2024-11-17 15:04:58.470977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:13.158 [2024-11-17 15:04:58.470985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:13.158 [2024-11-17 15:04:58.470993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:13.158 [2024-11-17 15:04:58.471000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.158 [2024-11-17 15:04:58.471007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:13.158 [2024-11-17 15:04:58.471014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:13.158 [2024-11-17 15:04:58.471021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.158 [2024-11-17 15:04:58.471031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:13.158 [2024-11-17 15:04:58.471038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:13.158 [2024-11-17 15:04:58.471053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.158 [2024-11-17 15:04:58.471061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:13.158 [2024-11-17 15:04:58.471068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:13.158 [2024-11-17 15:04:58.471075] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.158 [2024-11-17 15:04:58.471081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:13.158 [2024-11-17 15:04:58.471089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:13.158 [2024-11-17 15:04:58.471095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.158 [2024-11-17 15:04:58.471101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:13.158 [2024-11-17 15:04:58.471108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:13.158 [2024-11-17 15:04:58.471116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.158 [2024-11-17 15:04:58.471123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:13.158 [2024-11-17 15:04:58.471130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:13.158 [2024-11-17 15:04:58.471137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.158 [2024-11-17 15:04:58.471144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:13.158 [2024-11-17 15:04:58.471151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:13.158 [2024-11-17 15:04:58.471158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.158 [2024-11-17 15:04:58.471165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:13.158 [2024-11-17 15:04:58.471172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:13.158 [2024-11-17 15:04:58.471179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.158 [2024-11-17 15:04:58.471186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:13.158 [2024-11-17 15:04:58.471193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:13.158 [2024-11-17 15:04:58.471200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.158 [2024-11-17 15:04:58.471206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:13.158 [2024-11-17 15:04:58.471213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:13.158 [2024-11-17 15:04:58.471221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.158 [2024-11-17 15:04:58.471227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:13.159 [2024-11-17 15:04:58.471234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:13.159 [2024-11-17 15:04:58.471240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.159 [2024-11-17 15:04:58.471247] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:13.159 [2024-11-17 15:04:58.471256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:13.159 [2024-11-17 15:04:58.471266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:13.159 [2024-11-17 15:04:58.471273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.159 [2024-11-17 15:04:58.471281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:13.159 [2024-11-17 15:04:58.471288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:13.159 [2024-11-17 15:04:58.471295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:13.159 
[2024-11-17 15:04:58.471302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:13.159 [2024-11-17 15:04:58.471308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:13.159 [2024-11-17 15:04:58.471315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:13.159 [2024-11-17 15:04:58.471323] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:13.159 [2024-11-17 15:04:58.471333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.159 [2024-11-17 15:04:58.471341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:13.159 [2024-11-17 15:04:58.471348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:13.159 [2024-11-17 15:04:58.471356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:13.159 [2024-11-17 15:04:58.471364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:13.159 [2024-11-17 15:04:58.471371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:13.159 [2024-11-17 15:04:58.471378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:13.159 [2024-11-17 15:04:58.471385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:13.159 [2024-11-17 15:04:58.471391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:13.159 [2024-11-17 15:04:58.471399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:13.159 [2024-11-17 15:04:58.471406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:13.159 [2024-11-17 15:04:58.471413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:13.159 [2024-11-17 15:04:58.471420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:13.159 [2024-11-17 15:04:58.471427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:13.159 [2024-11-17 15:04:58.471434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:13.159 [2024-11-17 15:04:58.471441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:13.159 [2024-11-17 15:04:58.471453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.159 [2024-11-17 15:04:58.471460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:13.159 [2024-11-17 15:04:58.471468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:13.159 [2024-11-17 15:04:58.471475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:13.159 [2024-11-17 15:04:58.471483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:13.159 [2024-11-17 15:04:58.471490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.471498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:13.159 [2024-11-17 15:04:58.471507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:28:13.159 [2024-11-17 15:04:58.471514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.503675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.503727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:13.159 [2024-11-17 15:04:58.503740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.114 ms 00:28:13.159 [2024-11-17 15:04:58.503749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.503868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.503877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:13.159 [2024-11-17 15:04:58.503887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:13.159 [2024-11-17 15:04:58.503896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.552350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.552555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:13.159 [2024-11-17 15:04:58.552578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.367 ms 00:28:13.159 [2024-11-17 15:04:58.552587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.552640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.552651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:13.159 [2024-11-17 15:04:58.552661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:13.159 [2024-11-17 15:04:58.552675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.553309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.553334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:13.159 [2024-11-17 15:04:58.553345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:28:13.159 [2024-11-17 15:04:58.553354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.553510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.553528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:13.159 [2024-11-17 15:04:58.553537] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:28:13.159 [2024-11-17 15:04:58.553550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.569345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.569390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:13.159 [2024-11-17 15:04:58.569404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.773 ms 00:28:13.159 [2024-11-17 15:04:58.569412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.583900] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:13.159 [2024-11-17 15:04:58.583960] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:13.159 [2024-11-17 15:04:58.583975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.583983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:13.159 [2024-11-17 15:04:58.583992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.460 ms 00:28:13.159 [2024-11-17 15:04:58.583999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.610188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.610236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:13.159 [2024-11-17 15:04:58.610255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.134 ms 00:28:13.159 [2024-11-17 15:04:58.610263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.623265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.623454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:13.159 [2024-11-17 15:04:58.623475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.971 ms 00:28:13.159 [2024-11-17 15:04:58.623483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.636042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.636088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:13.159 [2024-11-17 15:04:58.636101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.520 ms 00:28:13.159 [2024-11-17 15:04:58.636109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.159 [2024-11-17 15:04:58.636762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.159 [2024-11-17 15:04:58.636788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:13.159 [2024-11-17 15:04:58.636799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:28:13.160 [2024-11-17 15:04:58.636807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.422 [2024-11-17 15:04:58.702829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.422 [2024-11-17 15:04:58.703115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:13.422 [2024-11-17 15:04:58.703141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.999 ms 00:28:13.422 [2024-11-17 15:04:58.703159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.422 [2024-11-17 15:04:58.714198] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:13.422 [2024-11-17 15:04:58.717382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.422 [2024-11-17 15:04:58.717427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:13.422 [2024-11-17 15:04:58.717440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.088 ms 00:28:13.422 [2024-11-17 15:04:58.717448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.422 [2024-11-17 15:04:58.717536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.422 [2024-11-17 15:04:58.717548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:13.422 [2024-11-17 15:04:58.717557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:13.422 [2024-11-17 15:04:58.717566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.422 [2024-11-17 15:04:58.717639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.422 [2024-11-17 15:04:58.717650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:13.422 [2024-11-17 15:04:58.717659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:13.422 [2024-11-17 15:04:58.717667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.422 [2024-11-17 15:04:58.717688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.422 [2024-11-17 15:04:58.717698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:13.422 [2024-11-17 15:04:58.717707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:13.422 [2024-11-17 15:04:58.717715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.422 [2024-11-17 15:04:58.717756] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:13.422 [2024-11-17 15:04:58.717768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.422 [2024-11-17 15:04:58.717780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:13.422 [2024-11-17 15:04:58.717790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:13.422 [2024-11-17 15:04:58.717799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.422 [2024-11-17 15:04:58.743643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.422 [2024-11-17 15:04:58.743694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:13.422 [2024-11-17 15:04:58.743708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.824 ms 00:28:13.422 [2024-11-17 15:04:58.743717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.422 [2024-11-17 15:04:58.743814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.422 [2024-11-17 15:04:58.743825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:13.422 [2024-11-17 15:04:58.743834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:13.422 [2024-11-17 15:04:58.743842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
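Each FTL management step above is traced as a small record: an Action (or Rollback) marker, the step name, its duration in milliseconds, and a status, and the finish_msg entry that follows reports the total for the whole process (304.445 ms for this startup, versus 4691.796 ms for the earlier one). If a copy of this console output is saved to a file, the per-step durations can be totalled and roughly compared against those summaries; the grep/awk helper below is illustrative only, not part of the test suite, and the log path is hypothetical:

  # Sum the per-step trace_step durations and list the finish_msg summaries.
  log=console.log   # hypothetical: a saved copy of this console output
  grep -o 'duration: [0-9.]* ms' "$log" \
    | awk '{ sum += $2 } END { printf "sum of trace_step durations: %.3f ms\n", sum }'
  grep -o "Management process finished, name '[^']*', duration = [0-9.]* ms" "$log"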
00:28:13.422 [2024-11-17 15:04:58.745157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.445 ms, result 0 00:28:14.367  [2024-11-17T15:05:00.854Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-17T15:05:01.798Z] Copying: 41/1024 [MB] (20 MBps) [2024-11-17T15:05:03.186Z] Copying: 69/1024 [MB] (28 MBps) [2024-11-17T15:05:03.759Z] Copying: 95/1024 [MB] (26 MBps) [2024-11-17T15:05:05.145Z] Copying: 109/1024 [MB] (13 MBps) [2024-11-17T15:05:06.089Z] Copying: 119/1024 [MB] (10 MBps) [2024-11-17T15:05:07.032Z] Copying: 142/1024 [MB] (23 MBps) [2024-11-17T15:05:07.991Z] Copying: 167/1024 [MB] (24 MBps) [2024-11-17T15:05:08.938Z] Copying: 182/1024 [MB] (15 MBps) [2024-11-17T15:05:09.880Z] Copying: 199/1024 [MB] (16 MBps) [2024-11-17T15:05:10.825Z] Copying: 214/1024 [MB] (15 MBps) [2024-11-17T15:05:11.772Z] Copying: 227/1024 [MB] (12 MBps) [2024-11-17T15:05:13.160Z] Copying: 247/1024 [MB] (20 MBps) [2024-11-17T15:05:14.102Z] Copying: 269/1024 [MB] (21 MBps) [2024-11-17T15:05:15.047Z] Copying: 289/1024 [MB] (20 MBps) [2024-11-17T15:05:15.992Z] Copying: 307/1024 [MB] (18 MBps) [2024-11-17T15:05:16.939Z] Copying: 328/1024 [MB] (20 MBps) [2024-11-17T15:05:17.884Z] Copying: 345/1024 [MB] (17 MBps) [2024-11-17T15:05:18.827Z] Copying: 358/1024 [MB] (12 MBps) [2024-11-17T15:05:19.787Z] Copying: 380/1024 [MB] (22 MBps) [2024-11-17T15:05:21.174Z] Copying: 398/1024 [MB] (18 MBps) [2024-11-17T15:05:22.119Z] Copying: 437/1024 [MB] (39 MBps) [2024-11-17T15:05:23.060Z] Copying: 476/1024 [MB] (38 MBps) [2024-11-17T15:05:24.011Z] Copying: 508/1024 [MB] (31 MBps) [2024-11-17T15:05:25.014Z] Copying: 519/1024 [MB] (11 MBps) [2024-11-17T15:05:25.957Z] Copying: 550/1024 [MB] (30 MBps) [2024-11-17T15:05:26.903Z] Copying: 589/1024 [MB] (39 MBps) [2024-11-17T15:05:27.848Z] Copying: 626/1024 [MB] (36 MBps) [2024-11-17T15:05:28.793Z] Copying: 648/1024 [MB] (21 MBps) [2024-11-17T15:05:30.181Z] Copying: 662/1024 [MB] (14 MBps) [2024-11-17T15:05:31.125Z] Copying: 675/1024 [MB] (12 MBps) [2024-11-17T15:05:32.071Z] Copying: 697/1024 [MB] (22 MBps) [2024-11-17T15:05:33.014Z] Copying: 716/1024 [MB] (18 MBps) [2024-11-17T15:05:33.959Z] Copying: 736/1024 [MB] (20 MBps) [2024-11-17T15:05:34.903Z] Copying: 751/1024 [MB] (15 MBps) [2024-11-17T15:05:35.847Z] Copying: 762/1024 [MB] (10 MBps) [2024-11-17T15:05:36.791Z] Copying: 779/1024 [MB] (16 MBps) [2024-11-17T15:05:38.180Z] Copying: 803/1024 [MB] (23 MBps) [2024-11-17T15:05:39.124Z] Copying: 837/1024 [MB] (34 MBps) [2024-11-17T15:05:40.083Z] Copying: 874/1024 [MB] (36 MBps) [2024-11-17T15:05:41.027Z] Copying: 894/1024 [MB] (20 MBps) [2024-11-17T15:05:41.970Z] Copying: 915/1024 [MB] (20 MBps) [2024-11-17T15:05:42.915Z] Copying: 936/1024 [MB] (20 MBps) [2024-11-17T15:05:43.858Z] Copying: 959/1024 [MB] (23 MBps) [2024-11-17T15:05:44.802Z] Copying: 976/1024 [MB] (16 MBps) [2024-11-17T15:05:45.748Z] Copying: 1013/1024 [MB] (37 MBps) [2024-11-17T15:05:45.748Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-17 15:05:45.694191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.205 [2024-11-17 15:05:45.694236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:00.205 [2024-11-17 15:05:45.694249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:00.205 [2024-11-17 15:05:45.694256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.205 [2024-11-17 15:05:45.694274] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: 
*NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:00.205 [2024-11-17 15:05:45.696658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.205 [2024-11-17 15:05:45.696696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:00.205 [2024-11-17 15:05:45.696705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.370 ms 00:29:00.205 [2024-11-17 15:05:45.696713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.205 [2024-11-17 15:05:45.698803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.205 [2024-11-17 15:05:45.698841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:00.205 [2024-11-17 15:05:45.698855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms 00:29:00.205 [2024-11-17 15:05:45.698862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.205 [2024-11-17 15:05:45.698894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.205 [2024-11-17 15:05:45.698901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:00.205 [2024-11-17 15:05:45.698908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:00.205 [2024-11-17 15:05:45.698914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.205 [2024-11-17 15:05:45.698971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.205 [2024-11-17 15:05:45.698981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:00.205 [2024-11-17 15:05:45.698988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:00.205 [2024-11-17 15:05:45.698993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.205 [2024-11-17 15:05:45.699004] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:00.205 [2024-11-17 15:05:45.699013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:29:00.205 [2024-11-17 15:05:45.699082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:00.205 [2024-11-17 15:05:45.699287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699529] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:00.206 [2024-11-17 15:05:45.699621] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:00.206 [2024-11-17 15:05:45.699627] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8ab1106-8587-4bca-959a-3c4b3782d3c4 00:29:00.206 [2024-11-17 15:05:45.699634] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:00.206 [2024-11-17 15:05:45.699640] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:00.206 [2024-11-17 15:05:45.699645] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:00.206 [2024-11-17 15:05:45.699652] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:00.206 [2024-11-17 15:05:45.699660] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:00.206 [2024-11-17 15:05:45.699666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:00.206 [2024-11-17 15:05:45.699672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:00.206 [2024-11-17 15:05:45.699677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:00.206 [2024-11-17 15:05:45.699682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:00.206 [2024-11-17 15:05:45.699687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.206 [2024-11-17 15:05:45.699693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:29:00.206 [2024-11-17 15:05:45.699700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:29:00.206 [2024-11-17 15:05:45.699706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.206 [2024-11-17 15:05:45.709796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.206 [2024-11-17 15:05:45.709831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:00.206 [2024-11-17 15:05:45.709845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.078 ms 00:29:00.206 [2024-11-17 15:05:45.709851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.206 [2024-11-17 15:05:45.710162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.206 [2024-11-17 15:05:45.710176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:00.206 [2024-11-17 15:05:45.710183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:29:00.206 [2024-11-17 15:05:45.710189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.206 [2024-11-17 15:05:45.737455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.206 [2024-11-17 15:05:45.737492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:00.206 [2024-11-17 15:05:45.737500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.206 [2024-11-17 15:05:45.737506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.206 [2024-11-17 15:05:45.737554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.206 [2024-11-17 15:05:45.737560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:00.207 [2024-11-17 15:05:45.737566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.207 [2024-11-17 15:05:45.737572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.207 [2024-11-17 15:05:45.737610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.207 [2024-11-17 15:05:45.737618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:00.207 [2024-11-17 15:05:45.737627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.207 [2024-11-17 15:05:45.737633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.207 [2024-11-17 15:05:45.737645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.207 [2024-11-17 15:05:45.737650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:00.207 [2024-11-17 15:05:45.737656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.207 [2024-11-17 15:05:45.737664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.468 [2024-11-17 15:05:45.796990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.468 [2024-11-17 15:05:45.797027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:00.468 [2024-11-17 15:05:45.797039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.468 [2024-11-17 15:05:45.797045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.468 [2024-11-17 15:05:45.845623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.468 [2024-11-17 15:05:45.845658] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:00.468 [2024-11-17 15:05:45.845667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.468 [2024-11-17 15:05:45.845673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.468 [2024-11-17 15:05:45.845729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.468 [2024-11-17 15:05:45.845736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:00.468 [2024-11-17 15:05:45.845743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.468 [2024-11-17 15:05:45.845751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.468 [2024-11-17 15:05:45.845778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.468 [2024-11-17 15:05:45.845785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:00.468 [2024-11-17 15:05:45.845791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.468 [2024-11-17 15:05:45.845797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.468 [2024-11-17 15:05:45.845849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.468 [2024-11-17 15:05:45.845856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:00.468 [2024-11-17 15:05:45.845862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.468 [2024-11-17 15:05:45.845868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.468 [2024-11-17 15:05:45.845893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.468 [2024-11-17 15:05:45.845900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:00.468 [2024-11-17 15:05:45.845906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.468 [2024-11-17 15:05:45.845912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.469 [2024-11-17 15:05:45.845955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.469 [2024-11-17 15:05:45.845963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:00.469 [2024-11-17 15:05:45.845968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.469 [2024-11-17 15:05:45.845974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.469 [2024-11-17 15:05:45.846010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.469 [2024-11-17 15:05:45.846017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:00.469 [2024-11-17 15:05:45.846023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.469 [2024-11-17 15:05:45.846030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.469 [2024-11-17 15:05:45.846118] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 151.904 ms, result 0 00:29:01.412 00:29:01.412 00:29:01.412 15:05:46 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:01.412 [2024-11-17 15:05:46.793956] Starting SPDK v25.01-pre git 
sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:01.412 [2024-11-17 15:05:46.794077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81577 ] 00:29:01.412 [2024-11-17 15:05:46.950123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.674 [2024-11-17 15:05:47.029812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.936 [2024-11-17 15:05:47.235311] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:01.936 [2024-11-17 15:05:47.235355] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:01.936 [2024-11-17 15:05:47.386561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.936 [2024-11-17 15:05:47.386596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:01.936 [2024-11-17 15:05:47.386609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:01.936 [2024-11-17 15:05:47.386616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.936 [2024-11-17 15:05:47.386651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.936 [2024-11-17 15:05:47.386658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:01.936 [2024-11-17 15:05:47.386666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:01.936 [2024-11-17 15:05:47.386673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.936 [2024-11-17 15:05:47.386685] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:01.936 [2024-11-17 15:05:47.387205] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:01.936 [2024-11-17 15:05:47.387223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.936 [2024-11-17 15:05:47.387229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:01.936 [2024-11-17 15:05:47.387236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:29:01.936 [2024-11-17 15:05:47.387241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.936 [2024-11-17 15:05:47.387466] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:01.936 [2024-11-17 15:05:47.387490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.936 [2024-11-17 15:05:47.387498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:01.936 [2024-11-17 15:05:47.387507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:01.936 [2024-11-17 15:05:47.387512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.936 [2024-11-17 15:05:47.387566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.936 [2024-11-17 15:05:47.387574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:01.936 [2024-11-17 15:05:47.387580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:01.936 [2024-11-17 15:05:47.387585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.936 [2024-11-17 15:05:47.387784] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.937 [2024-11-17 15:05:47.387800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:01.937 [2024-11-17 15:05:47.387807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:29:01.937 [2024-11-17 15:05:47.387812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.937 [2024-11-17 15:05:47.387859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.937 [2024-11-17 15:05:47.387865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:01.937 [2024-11-17 15:05:47.387879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:01.937 [2024-11-17 15:05:47.387885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.937 [2024-11-17 15:05:47.387900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.937 [2024-11-17 15:05:47.387907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:01.937 [2024-11-17 15:05:47.387913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:01.937 [2024-11-17 15:05:47.387935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.937 [2024-11-17 15:05:47.387948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:01.937 [2024-11-17 15:05:47.390742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.937 [2024-11-17 15:05:47.390768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:01.937 [2024-11-17 15:05:47.390775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms 00:29:01.937 [2024-11-17 15:05:47.390781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.937 [2024-11-17 15:05:47.390806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.937 [2024-11-17 15:05:47.390813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:01.937 [2024-11-17 15:05:47.390819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:01.937 [2024-11-17 15:05:47.390825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.937 [2024-11-17 15:05:47.390853] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:01.937 [2024-11-17 15:05:47.390869] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:01.937 [2024-11-17 15:05:47.390897] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:01.937 [2024-11-17 15:05:47.390908] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:01.937 [2024-11-17 15:05:47.390996] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:01.937 [2024-11-17 15:05:47.391005] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:01.937 [2024-11-17 15:05:47.391013] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:01.937 [2024-11-17 15:05:47.391020] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:01.937 
[2024-11-17 15:05:47.391027] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:01.937 [2024-11-17 15:05:47.391033] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:01.937 [2024-11-17 15:05:47.391040] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:01.937 [2024-11-17 15:05:47.391046] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:01.937 [2024-11-17 15:05:47.391051] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:01.937 [2024-11-17 15:05:47.391057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.937 [2024-11-17 15:05:47.391063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:01.937 [2024-11-17 15:05:47.391069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:29:01.937 [2024-11-17 15:05:47.391074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.937 [2024-11-17 15:05:47.391140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.937 [2024-11-17 15:05:47.391145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:01.937 [2024-11-17 15:05:47.391151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:01.937 [2024-11-17 15:05:47.391158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.937 [2024-11-17 15:05:47.391233] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:01.937 [2024-11-17 15:05:47.391247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:01.937 [2024-11-17 15:05:47.391253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:01.937 [2024-11-17 15:05:47.391258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:01.937 [2024-11-17 15:05:47.391270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:01.937 [2024-11-17 15:05:47.391282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:01.937 [2024-11-17 15:05:47.391287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:01.937 [2024-11-17 15:05:47.391297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:01.937 [2024-11-17 15:05:47.391303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:01.937 [2024-11-17 15:05:47.391307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:01.937 [2024-11-17 15:05:47.391313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:01.937 [2024-11-17 15:05:47.391318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:01.937 [2024-11-17 15:05:47.391322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:01.937 [2024-11-17 15:05:47.391336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:01.937 [2024-11-17 
15:05:47.391341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:01.937 [2024-11-17 15:05:47.391350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.937 [2024-11-17 15:05:47.391360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:01.937 [2024-11-17 15:05:47.391365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.937 [2024-11-17 15:05:47.391375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:01.937 [2024-11-17 15:05:47.391380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.937 [2024-11-17 15:05:47.391390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:01.937 [2024-11-17 15:05:47.391395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:01.937 [2024-11-17 15:05:47.391404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:01.937 [2024-11-17 15:05:47.391409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:01.937 [2024-11-17 15:05:47.391418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:01.937 [2024-11-17 15:05:47.391423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:01.937 [2024-11-17 15:05:47.391429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:01.937 [2024-11-17 15:05:47.391435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:01.937 [2024-11-17 15:05:47.391440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:01.937 [2024-11-17 15:05:47.391444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:01.937 [2024-11-17 15:05:47.391454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:01.937 [2024-11-17 15:05:47.391459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391464] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:01.937 [2024-11-17 15:05:47.391470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:01.937 [2024-11-17 15:05:47.391476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:01.937 [2024-11-17 15:05:47.391481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:01.937 [2024-11-17 15:05:47.391487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:01.937 [2024-11-17 15:05:47.391492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:01.937 [2024-11-17 15:05:47.391496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:29:01.937 [2024-11-17 15:05:47.391501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:01.937 [2024-11-17 15:05:47.391506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:01.938 [2024-11-17 15:05:47.391511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:01.938 [2024-11-17 15:05:47.391516] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:01.938 [2024-11-17 15:05:47.391525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:01.938 [2024-11-17 15:05:47.391531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:01.938 [2024-11-17 15:05:47.391536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:01.938 [2024-11-17 15:05:47.391541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:01.938 [2024-11-17 15:05:47.391547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:01.938 [2024-11-17 15:05:47.391552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:01.938 [2024-11-17 15:05:47.391557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:01.938 [2024-11-17 15:05:47.391562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:01.938 [2024-11-17 15:05:47.391567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:01.938 [2024-11-17 15:05:47.391573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:01.938 [2024-11-17 15:05:47.391578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:01.938 [2024-11-17 15:05:47.391583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:01.938 [2024-11-17 15:05:47.391588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:01.938 [2024-11-17 15:05:47.391594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:01.938 [2024-11-17 15:05:47.391600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:01.938 [2024-11-17 15:05:47.391605] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:01.938 [2024-11-17 15:05:47.391611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:01.938 [2024-11-17 15:05:47.391617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:01.938 [2024-11-17 15:05:47.391623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:01.938 [2024-11-17 15:05:47.391629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:01.938 [2024-11-17 15:05:47.391634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:01.938 [2024-11-17 15:05:47.391640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.391645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:01.938 [2024-11-17 15:05:47.391651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:29:01.938 [2024-11-17 15:05:47.391656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.410293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.410322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:01.938 [2024-11-17 15:05:47.410329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.607 ms 00:29:01.938 [2024-11-17 15:05:47.410335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.410399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.410405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:01.938 [2024-11-17 15:05:47.410411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:01.938 [2024-11-17 15:05:47.410419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.445267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.445301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:01.938 [2024-11-17 15:05:47.445311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.811 ms 00:29:01.938 [2024-11-17 15:05:47.445317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.445350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.445358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:01.938 [2024-11-17 15:05:47.445365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:01.938 [2024-11-17 15:05:47.445371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.445442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.445450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:01.938 [2024-11-17 15:05:47.445457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:01.938 [2024-11-17 15:05:47.445463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.445552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.445562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:29:01.938 [2024-11-17 15:05:47.445568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:29:01.938 [2024-11-17 15:05:47.445573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.456184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.456214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:01.938 [2024-11-17 15:05:47.456222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.596 ms 00:29:01.938 [2024-11-17 15:05:47.456228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.456311] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:01.938 [2024-11-17 15:05:47.456320] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:01.938 [2024-11-17 15:05:47.456327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.456333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:01.938 [2024-11-17 15:05:47.456341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:01.938 [2024-11-17 15:05:47.456346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.465471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.465496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:01.938 [2024-11-17 15:05:47.465503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.112 ms 00:29:01.938 [2024-11-17 15:05:47.465509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.465600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.465607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:01.938 [2024-11-17 15:05:47.465613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:29:01.938 [2024-11-17 15:05:47.465619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.465646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.465657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:01.938 [2024-11-17 15:05:47.465664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:01.938 [2024-11-17 15:05:47.465670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.466126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.466142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:01.938 [2024-11-17 15:05:47.466148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:29:01.938 [2024-11-17 15:05:47.466153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.466165] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:01.938 [2024-11-17 15:05:47.466174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.466180] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:01.938 [2024-11-17 15:05:47.466186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:01.938 [2024-11-17 15:05:47.466191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.474667] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:01.938 [2024-11-17 15:05:47.474775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.474783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:01.938 [2024-11-17 15:05:47.474790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.571 ms 00:29:01.938 [2024-11-17 15:05:47.474795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.938 [2024-11-17 15:05:47.476425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.938 [2024-11-17 15:05:47.476449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:01.939 [2024-11-17 15:05:47.476458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:29:01.939 [2024-11-17 15:05:47.476463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.939 [2024-11-17 15:05:47.476522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.939 [2024-11-17 15:05:47.476531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:01.939 [2024-11-17 15:05:47.476537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:01.939 [2024-11-17 15:05:47.476542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.939 [2024-11-17 15:05:47.476558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.939 [2024-11-17 15:05:47.476566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:01.939 [2024-11-17 15:05:47.476574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:01.939 [2024-11-17 15:05:47.476580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.939 [2024-11-17 15:05:47.476600] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:01.939 [2024-11-17 15:05:47.476607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.939 [2024-11-17 15:05:47.476613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:01.939 [2024-11-17 15:05:47.476619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:01.939 [2024-11-17 15:05:47.476624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.200 [2024-11-17 15:05:47.494531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.200 [2024-11-17 15:05:47.494565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:02.200 [2024-11-17 15:05:47.494573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.891 ms 00:29:02.200 [2024-11-17 15:05:47.494579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.200 [2024-11-17 15:05:47.494633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.200 [2024-11-17 15:05:47.494640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:02.200 [2024-11-17 15:05:47.494647] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:02.200 [2024-11-17 15:05:47.494652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.200 [2024-11-17 15:05:47.495348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.471 ms, result 0 00:29:03.143  [2024-11-17T15:05:50.073Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-17T15:05:50.645Z] Copying: 50/1024 [MB] (27 MBps) [2024-11-17T15:05:52.033Z] Copying: 63/1024 [MB] (12 MBps) [2024-11-17T15:05:52.978Z] Copying: 73/1024 [MB] (10 MBps) [2024-11-17T15:05:53.924Z] Copying: 89/1024 [MB] (16 MBps) [2024-11-17T15:05:54.869Z] Copying: 108/1024 [MB] (18 MBps) [2024-11-17T15:05:55.896Z] Copying: 120/1024 [MB] (11 MBps) [2024-11-17T15:05:56.841Z] Copying: 132/1024 [MB] (12 MBps) [2024-11-17T15:05:57.787Z] Copying: 143/1024 [MB] (10 MBps) [2024-11-17T15:05:58.730Z] Copying: 155/1024 [MB] (11 MBps) [2024-11-17T15:05:59.677Z] Copying: 173/1024 [MB] (17 MBps) [2024-11-17T15:06:01.064Z] Copying: 186/1024 [MB] (12 MBps) [2024-11-17T15:06:01.635Z] Copying: 201/1024 [MB] (15 MBps) [2024-11-17T15:06:03.022Z] Copying: 224/1024 [MB] (23 MBps) [2024-11-17T15:06:03.967Z] Copying: 254/1024 [MB] (30 MBps) [2024-11-17T15:06:04.910Z] Copying: 275/1024 [MB] (21 MBps) [2024-11-17T15:06:05.856Z] Copying: 296/1024 [MB] (21 MBps) [2024-11-17T15:06:06.801Z] Copying: 321/1024 [MB] (25 MBps) [2024-11-17T15:06:07.743Z] Copying: 342/1024 [MB] (20 MBps) [2024-11-17T15:06:08.688Z] Copying: 367/1024 [MB] (24 MBps) [2024-11-17T15:06:10.074Z] Copying: 384/1024 [MB] (16 MBps) [2024-11-17T15:06:10.646Z] Copying: 403/1024 [MB] (19 MBps) [2024-11-17T15:06:12.033Z] Copying: 421/1024 [MB] (17 MBps) [2024-11-17T15:06:12.979Z] Copying: 431/1024 [MB] (10 MBps) [2024-11-17T15:06:13.921Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-17T15:06:14.867Z] Copying: 456/1024 [MB] (14 MBps) [2024-11-17T15:06:15.812Z] Copying: 471/1024 [MB] (14 MBps) [2024-11-17T15:06:16.756Z] Copying: 488/1024 [MB] (17 MBps) [2024-11-17T15:06:17.701Z] Copying: 503/1024 [MB] (15 MBps) [2024-11-17T15:06:18.644Z] Copying: 514/1024 [MB] (11 MBps) [2024-11-17T15:06:20.030Z] Copying: 527/1024 [MB] (12 MBps) [2024-11-17T15:06:20.974Z] Copying: 543/1024 [MB] (16 MBps) [2024-11-17T15:06:21.918Z] Copying: 561/1024 [MB] (18 MBps) [2024-11-17T15:06:22.863Z] Copying: 581/1024 [MB] (19 MBps) [2024-11-17T15:06:23.808Z] Copying: 610/1024 [MB] (29 MBps) [2024-11-17T15:06:24.752Z] Copying: 628/1024 [MB] (17 MBps) [2024-11-17T15:06:25.696Z] Copying: 650/1024 [MB] (21 MBps) [2024-11-17T15:06:26.643Z] Copying: 673/1024 [MB] (23 MBps) [2024-11-17T15:06:27.648Z] Copying: 693/1024 [MB] (20 MBps) [2024-11-17T15:06:29.037Z] Copying: 715/1024 [MB] (21 MBps) [2024-11-17T15:06:29.981Z] Copying: 735/1024 [MB] (20 MBps) [2024-11-17T15:06:30.926Z] Copying: 753/1024 [MB] (18 MBps) [2024-11-17T15:06:31.871Z] Copying: 770/1024 [MB] (16 MBps) [2024-11-17T15:06:32.815Z] Copying: 782/1024 [MB] (12 MBps) [2024-11-17T15:06:33.760Z] Copying: 793/1024 [MB] (10 MBps) [2024-11-17T15:06:34.704Z] Copying: 804/1024 [MB] (11 MBps) [2024-11-17T15:06:35.649Z] Copying: 818/1024 [MB] (14 MBps) [2024-11-17T15:06:37.037Z] Copying: 831/1024 [MB] (12 MBps) [2024-11-17T15:06:37.981Z] Copying: 842/1024 [MB] (10 MBps) [2024-11-17T15:06:38.923Z] Copying: 857/1024 [MB] (15 MBps) [2024-11-17T15:06:39.867Z] Copying: 877/1024 [MB] (20 MBps) [2024-11-17T15:06:40.810Z] Copying: 888/1024 [MB] (10 MBps) [2024-11-17T15:06:41.754Z] Copying: 903/1024 [MB] (14 MBps) 
[2024-11-17T15:06:42.699Z] Copying: 914/1024 [MB] (10 MBps) [2024-11-17T15:06:43.644Z] Copying: 925/1024 [MB] (11 MBps) [2024-11-17T15:06:45.032Z] Copying: 937/1024 [MB] (11 MBps) [2024-11-17T15:06:45.977Z] Copying: 953/1024 [MB] (16 MBps) [2024-11-17T15:06:46.922Z] Copying: 964/1024 [MB] (11 MBps) [2024-11-17T15:06:47.866Z] Copying: 983/1024 [MB] (18 MBps) [2024-11-17T15:06:48.811Z] Copying: 995/1024 [MB] (12 MBps) [2024-11-17T15:06:49.768Z] Copying: 1007/1024 [MB] (11 MBps) [2024-11-17T15:06:50.030Z] Copying: 1019/1024 [MB] (12 MBps) [2024-11-17T15:06:50.605Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 15:06:50.290156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.062 [2024-11-17 15:06:50.290241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:05.062 [2024-11-17 15:06:50.290258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:05.062 [2024-11-17 15:06:50.290267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.062 [2024-11-17 15:06:50.290292] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:05.063 [2024-11-17 15:06:50.294149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.063 [2024-11-17 15:06:50.294200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:05.063 [2024-11-17 15:06:50.294214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.838 ms 00:30:05.063 [2024-11-17 15:06:50.294223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.063 [2024-11-17 15:06:50.294469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.063 [2024-11-17 15:06:50.294481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:05.063 [2024-11-17 15:06:50.294491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:30:05.063 [2024-11-17 15:06:50.294499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.063 [2024-11-17 15:06:50.294530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.063 [2024-11-17 15:06:50.294546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:05.063 [2024-11-17 15:06:50.294555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:05.063 [2024-11-17 15:06:50.294564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.063 [2024-11-17 15:06:50.294627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.063 [2024-11-17 15:06:50.294638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:05.063 [2024-11-17 15:06:50.294647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:05.063 [2024-11-17 15:06:50.294654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.063 [2024-11-17 15:06:50.294668] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:05.063 [2024-11-17 15:06:50.294680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: 
free 00:30:05.063 [2024-11-17 15:06:50.294706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 
wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.294999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:05.063 [2024-11-17 15:06:50.295305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295337] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:05.064 [2024-11-17 15:06:50.295531] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:05.064 [2024-11-17 15:06:50.295539] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8ab1106-8587-4bca-959a-3c4b3782d3c4 00:30:05.064 [2024-11-17 15:06:50.295550] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total valid LBAs: 0 00:30:05.064 [2024-11-17 15:06:50.295558] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:05.064 [2024-11-17 15:06:50.295565] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:05.064 [2024-11-17 15:06:50.295573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:05.064 [2024-11-17 15:06:50.295581] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:05.064 [2024-11-17 15:06:50.295589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:05.064 [2024-11-17 15:06:50.295596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:05.064 [2024-11-17 15:06:50.295603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:05.064 [2024-11-17 15:06:50.295609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:05.064 [2024-11-17 15:06:50.295616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.064 [2024-11-17 15:06:50.295623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:05.064 [2024-11-17 15:06:50.295631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:30:05.064 [2024-11-17 15:06:50.295639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.311720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.064 [2024-11-17 15:06:50.311776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:05.064 [2024-11-17 15:06:50.311789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.061 ms 00:30:05.064 [2024-11-17 15:06:50.311798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.312228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.064 [2024-11-17 15:06:50.312268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:05.064 [2024-11-17 15:06:50.312279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:30:05.064 [2024-11-17 15:06:50.312296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.349763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.349821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:05.064 [2024-11-17 15:06:50.349835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.349845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.349936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.349947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:05.064 [2024-11-17 15:06:50.349959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.349973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.350040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.350051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:05.064 [2024-11-17 15:06:50.350062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 
15:06:50.350072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.350090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.350101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:05.064 [2024-11-17 15:06:50.350110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.350119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.435027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.435076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:05.064 [2024-11-17 15:06:50.435088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.435097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.499396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.499437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:05.064 [2024-11-17 15:06:50.499448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.499456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.499527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.499537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:05.064 [2024-11-17 15:06:50.499545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.499553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.499588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.499597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:05.064 [2024-11-17 15:06:50.499606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.499614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.499684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.499694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:05.064 [2024-11-17 15:06:50.499702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.499709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.499736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.499745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:05.064 [2024-11-17 15:06:50.499753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.499760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.499793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.499804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:05.064 [2024-11-17 15:06:50.499811] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.499819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.499856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.064 [2024-11-17 15:06:50.499865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:05.064 [2024-11-17 15:06:50.499872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.064 [2024-11-17 15:06:50.499880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.064 [2024-11-17 15:06:50.500020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 209.841 ms, result 0 00:30:05.637 00:30:05.637 00:30:05.898 15:06:51 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:07.815 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:07.815 15:06:53 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:07.815 [2024-11-17 15:06:53.280829] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:07.815 [2024-11-17 15:06:53.280927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82246 ] 00:30:08.076 [2024-11-17 15:06:53.434668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.076 [2024-11-17 15:06:53.538447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.338 [2024-11-17 15:06:53.828855] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:08.338 [2024-11-17 15:06:53.828954] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:08.601 [2024-11-17 15:06:53.991071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.601 [2024-11-17 15:06:53.991128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:08.601 [2024-11-17 15:06:53.991150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:08.601 [2024-11-17 15:06:53.991160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.601 [2024-11-17 15:06:53.991216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.601 [2024-11-17 15:06:53.991228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:08.601 [2024-11-17 15:06:53.991239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:08.601 [2024-11-17 15:06:53.991247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.601 [2024-11-17 15:06:53.991268] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:08.601 [2024-11-17 15:06:53.992076] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:08.601 [2024-11-17 15:06:53.992104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.601 [2024-11-17 15:06:53.992114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:30:08.601 [2024-11-17 15:06:53.992123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:30:08.601 [2024-11-17 15:06:53.992131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.601 [2024-11-17 15:06:53.992373] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:08.601 [2024-11-17 15:06:53.992395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.601 [2024-11-17 15:06:53.992405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:08.602 [2024-11-17 15:06:53.992419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:08.602 [2024-11-17 15:06:53.992427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.602 [2024-11-17 15:06:53.992479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.602 [2024-11-17 15:06:53.992489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:08.602 [2024-11-17 15:06:53.992497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:08.602 [2024-11-17 15:06:53.992504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.602 [2024-11-17 15:06:53.992772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.602 [2024-11-17 15:06:53.992785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:08.602 [2024-11-17 15:06:53.992794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:30:08.602 [2024-11-17 15:06:53.992802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.602 [2024-11-17 15:06:53.992946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.602 [2024-11-17 15:06:53.992958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:08.602 [2024-11-17 15:06:53.992967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:30:08.602 [2024-11-17 15:06:53.992975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.602 [2024-11-17 15:06:53.993032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.602 [2024-11-17 15:06:53.993043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:08.602 [2024-11-17 15:06:53.993052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:08.602 [2024-11-17 15:06:53.993063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.602 [2024-11-17 15:06:53.993084] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:08.602 [2024-11-17 15:06:53.997398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.602 [2024-11-17 15:06:53.997456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:08.602 [2024-11-17 15:06:53.997468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:30:08.602 [2024-11-17 15:06:53.997476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.602 [2024-11-17 15:06:53.997511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.602 [2024-11-17 15:06:53.997520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:08.602 [2024-11-17 15:06:53.997527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:30:08.602 [2024-11-17 15:06:53.997534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.602 [2024-11-17 15:06:53.997593] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:08.602 [2024-11-17 15:06:53.997624] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:08.602 [2024-11-17 15:06:53.997669] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:08.602 [2024-11-17 15:06:53.997689] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:08.602 [2024-11-17 15:06:53.997800] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:08.602 [2024-11-17 15:06:53.997817] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:08.602 [2024-11-17 15:06:53.997829] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:08.602 [2024-11-17 15:06:53.997845] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:08.602 [2024-11-17 15:06:53.997855] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:08.602 [2024-11-17 15:06:53.997863] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:08.602 [2024-11-17 15:06:53.997874] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:08.602 [2024-11-17 15:06:53.997882] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:08.602 [2024-11-17 15:06:53.997890] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:08.602 [2024-11-17 15:06:53.997899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.602 [2024-11-17 15:06:53.997906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:08.602 [2024-11-17 15:06:53.997916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:30:08.602 [2024-11-17 15:06:53.997943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.602 [2024-11-17 15:06:53.998026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.602 [2024-11-17 15:06:53.998040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:08.602 [2024-11-17 15:06:53.998049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:08.602 [2024-11-17 15:06:53.998063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.602 [2024-11-17 15:06:53.998168] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:08.602 [2024-11-17 15:06:53.998184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:08.602 [2024-11-17 15:06:53.998197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.602 [2024-11-17 15:06:53.998206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:08.602 [2024-11-17 15:06:53.998226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:08.602 [2024-11-17 
15:06:53.998234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:08.602 [2024-11-17 15:06:53.998241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:08.602 [2024-11-17 15:06:53.998249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.602 [2024-11-17 15:06:53.998264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:08.602 [2024-11-17 15:06:53.998271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:08.602 [2024-11-17 15:06:53.998278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.602 [2024-11-17 15:06:53.998286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:08.602 [2024-11-17 15:06:53.998294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:08.602 [2024-11-17 15:06:53.998304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:08.602 [2024-11-17 15:06:53.998333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:08.602 [2024-11-17 15:06:53.998340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:08.602 [2024-11-17 15:06:53.998354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.602 [2024-11-17 15:06:53.998369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:08.602 [2024-11-17 15:06:53.998376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.602 [2024-11-17 15:06:53.998390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:08.602 [2024-11-17 15:06:53.998397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.602 [2024-11-17 15:06:53.998411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:08.602 [2024-11-17 15:06:53.998418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.602 [2024-11-17 15:06:53.998432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:08.602 [2024-11-17 15:06:53.998441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.602 [2024-11-17 15:06:53.998455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:08.602 [2024-11-17 15:06:53.998461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:08.602 [2024-11-17 15:06:53.998469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.602 [2024-11-17 15:06:53.998476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log 00:30:08.602 [2024-11-17 15:06:53.998483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:08.602 [2024-11-17 15:06:53.998490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:08.602 [2024-11-17 15:06:53.998503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:08.602 [2024-11-17 15:06:53.998509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998516] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:08.602 [2024-11-17 15:06:53.998524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:08.602 [2024-11-17 15:06:53.998532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.602 [2024-11-17 15:06:53.998539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.602 [2024-11-17 15:06:53.998547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:08.602 [2024-11-17 15:06:53.998555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:08.602 [2024-11-17 15:06:53.998562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:08.602 [2024-11-17 15:06:53.998569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:08.602 [2024-11-17 15:06:53.998576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:08.602 [2024-11-17 15:06:53.998583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:08.602 [2024-11-17 15:06:53.998592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:08.603 [2024-11-17 15:06:53.998604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.603 [2024-11-17 15:06:53.998613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:08.603 [2024-11-17 15:06:53.998620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:08.603 [2024-11-17 15:06:53.998627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:08.603 [2024-11-17 15:06:53.998634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:08.603 [2024-11-17 15:06:53.998641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:08.603 [2024-11-17 15:06:53.998648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:08.603 [2024-11-17 15:06:53.998656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:08.603 [2024-11-17 15:06:53.998663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:08.603 [2024-11-17 15:06:53.998670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:08.603 [2024-11-17 15:06:53.998678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:08.603 [2024-11-17 15:06:53.998685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:08.603 [2024-11-17 15:06:53.998692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:08.603 [2024-11-17 15:06:53.998699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:08.603 [2024-11-17 15:06:53.998707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:08.603 [2024-11-17 15:06:53.998714] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:08.603 [2024-11-17 15:06:53.998723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.603 [2024-11-17 15:06:53.998731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:08.603 [2024-11-17 15:06:53.998739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:08.603 [2024-11-17 15:06:53.998746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:08.603 [2024-11-17 15:06:53.998753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:08.603 [2024-11-17 15:06:53.998762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:53.998770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:08.603 [2024-11-17 15:06:53.998781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:30:08.603 [2024-11-17 15:06:53.998789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.026748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.026793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:08.603 [2024-11-17 15:06:54.026805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.916 ms 00:30:08.603 [2024-11-17 15:06:54.026813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.026897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.026906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:08.603 [2024-11-17 15:06:54.026914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:08.603 [2024-11-17 15:06:54.026938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.071846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.071901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:08.603 
[2024-11-17 15:06:54.071943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.849 ms 00:30:08.603 [2024-11-17 15:06:54.071952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.072005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.072016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:08.603 [2024-11-17 15:06:54.072025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:08.603 [2024-11-17 15:06:54.072033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.072146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.072166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:08.603 [2024-11-17 15:06:54.072175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:08.603 [2024-11-17 15:06:54.072188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.072317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.072334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:08.603 [2024-11-17 15:06:54.072343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:30:08.603 [2024-11-17 15:06:54.072356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.088080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.088126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:08.603 [2024-11-17 15:06:54.088138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.704 ms 00:30:08.603 [2024-11-17 15:06:54.088146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.088298] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:08.603 [2024-11-17 15:06:54.088318] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:08.603 [2024-11-17 15:06:54.088329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.088345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:08.603 [2024-11-17 15:06:54.088355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:08.603 [2024-11-17 15:06:54.088367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.100825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.100878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:08.603 [2024-11-17 15:06:54.100889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.440 ms 00:30:08.603 [2024-11-17 15:06:54.100898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.101037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.101048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:08.603 [2024-11-17 15:06:54.101057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.101 ms 00:30:08.603 [2024-11-17 15:06:54.101070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.101121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.101136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:08.603 [2024-11-17 15:06:54.101145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:08.603 [2024-11-17 15:06:54.101153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.101741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.101758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:08.603 [2024-11-17 15:06:54.101767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:30:08.603 [2024-11-17 15:06:54.101775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.101793] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:08.603 [2024-11-17 15:06:54.101812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.101820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:08.603 [2024-11-17 15:06:54.101828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:08.603 [2024-11-17 15:06:54.101836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.114305] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:08.603 [2024-11-17 15:06:54.114480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.114492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:08.603 [2024-11-17 15:06:54.114502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.625 ms 00:30:08.603 [2024-11-17 15:06:54.114509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.116804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.116833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:08.603 [2024-11-17 15:06:54.116843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.270 ms 00:30:08.603 [2024-11-17 15:06:54.116850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.116958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.116970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:08.603 [2024-11-17 15:06:54.116979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:08.603 [2024-11-17 15:06:54.116987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.117012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.117020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:08.603 [2024-11-17 15:06:54.117032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:08.603 [2024-11-17 15:06:54.117040] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:08.603 [2024-11-17 15:06:54.117074] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:08.603 [2024-11-17 15:06:54.117084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.603 [2024-11-17 15:06:54.117092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:08.603 [2024-11-17 15:06:54.117101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:08.603 [2024-11-17 15:06:54.117108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.865 [2024-11-17 15:06:54.143758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.865 [2024-11-17 15:06:54.143813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:08.865 [2024-11-17 15:06:54.143826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.631 ms 00:30:08.865 [2024-11-17 15:06:54.143834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.865 [2024-11-17 15:06:54.143949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.865 [2024-11-17 15:06:54.143962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:08.865 [2024-11-17 15:06:54.143971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:08.865 [2024-11-17 15:06:54.143979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.865 [2024-11-17 15:06:54.145160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.591 ms, result 0 00:30:09.810  [2024-11-17T15:06:56.298Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-17T15:06:57.241Z] Copying: 26/1024 [MB] (13 MBps) [2024-11-17T15:06:58.204Z] Copying: 43/1024 [MB] (16 MBps) [2024-11-17T15:06:59.161Z] Copying: 77/1024 [MB] (34 MBps) [2024-11-17T15:07:00.549Z] Copying: 112/1024 [MB] (35 MBps) [2024-11-17T15:07:01.494Z] Copying: 136/1024 [MB] (23 MBps) [2024-11-17T15:07:02.439Z] Copying: 147/1024 [MB] (10 MBps) [2024-11-17T15:07:03.385Z] Copying: 160/1024 [MB] (13 MBps) [2024-11-17T15:07:04.329Z] Copying: 180/1024 [MB] (19 MBps) [2024-11-17T15:07:05.274Z] Copying: 196/1024 [MB] (16 MBps) [2024-11-17T15:07:06.219Z] Copying: 222/1024 [MB] (25 MBps) [2024-11-17T15:07:07.163Z] Copying: 236/1024 [MB] (13 MBps) [2024-11-17T15:07:08.550Z] Copying: 262/1024 [MB] (26 MBps) [2024-11-17T15:07:09.494Z] Copying: 281/1024 [MB] (18 MBps) [2024-11-17T15:07:10.438Z] Copying: 300/1024 [MB] (18 MBps) [2024-11-17T15:07:11.380Z] Copying: 320/1024 [MB] (20 MBps) [2024-11-17T15:07:12.323Z] Copying: 344/1024 [MB] (23 MBps) [2024-11-17T15:07:13.267Z] Copying: 381/1024 [MB] (36 MBps) [2024-11-17T15:07:14.213Z] Copying: 412/1024 [MB] (31 MBps) [2024-11-17T15:07:15.600Z] Copying: 432/1024 [MB] (19 MBps) [2024-11-17T15:07:16.170Z] Copying: 463/1024 [MB] (30 MBps) [2024-11-17T15:07:17.556Z] Copying: 489/1024 [MB] (26 MBps) [2024-11-17T15:07:18.500Z] Copying: 509/1024 [MB] (20 MBps) [2024-11-17T15:07:19.445Z] Copying: 528/1024 [MB] (18 MBps) [2024-11-17T15:07:20.389Z] Copying: 541/1024 [MB] (12 MBps) [2024-11-17T15:07:21.335Z] Copying: 556/1024 [MB] (15 MBps) [2024-11-17T15:07:22.279Z] Copying: 570/1024 [MB] (13 MBps) [2024-11-17T15:07:23.223Z] Copying: 583/1024 [MB] (13 MBps) [2024-11-17T15:07:24.167Z] Copying: 593/1024 [MB] (10 MBps) [2024-11-17T15:07:25.555Z] Copying: 612/1024 [MB] (18 MBps) [2024-11-17T15:07:26.501Z] Copying: 630/1024 
[MB] (18 MBps) [2024-11-17T15:07:27.447Z] Copying: 646/1024 [MB] (15 MBps) [2024-11-17T15:07:28.391Z] Copying: 663/1024 [MB] (17 MBps) [2024-11-17T15:07:29.368Z] Copying: 681/1024 [MB] (18 MBps) [2024-11-17T15:07:30.404Z] Copying: 700/1024 [MB] (18 MBps) [2024-11-17T15:07:31.350Z] Copying: 719/1024 [MB] (19 MBps) [2024-11-17T15:07:32.295Z] Copying: 737/1024 [MB] (17 MBps) [2024-11-17T15:07:33.239Z] Copying: 756/1024 [MB] (18 MBps) [2024-11-17T15:07:34.182Z] Copying: 775/1024 [MB] (19 MBps) [2024-11-17T15:07:35.567Z] Copying: 800/1024 [MB] (24 MBps) [2024-11-17T15:07:36.510Z] Copying: 833/1024 [MB] (32 MBps) [2024-11-17T15:07:37.454Z] Copying: 854/1024 [MB] (21 MBps) [2024-11-17T15:07:38.399Z] Copying: 871/1024 [MB] (16 MBps) [2024-11-17T15:07:39.350Z] Copying: 891/1024 [MB] (19 MBps) [2024-11-17T15:07:40.298Z] Copying: 909/1024 [MB] (18 MBps) [2024-11-17T15:07:41.242Z] Copying: 924/1024 [MB] (15 MBps) [2024-11-17T15:07:42.187Z] Copying: 940/1024 [MB] (15 MBps) [2024-11-17T15:07:43.574Z] Copying: 956/1024 [MB] (15 MBps) [2024-11-17T15:07:44.520Z] Copying: 975/1024 [MB] (19 MBps) [2024-11-17T15:07:45.466Z] Copying: 991/1024 [MB] (16 MBps) [2024-11-17T15:07:46.411Z] Copying: 1013/1024 [MB] (22 MBps) [2024-11-17T15:07:46.673Z] Copying: 1048296/1048576 [kB] (10104 kBps) [2024-11-17T15:07:46.673Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-17 15:07:46.471339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.130 [2024-11-17 15:07:46.471392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:01.130 [2024-11-17 15:07:46.471407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:01.130 [2024-11-17 15:07:46.471416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.131 [2024-11-17 15:07:46.474230] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:01.131 [2024-11-17 15:07:46.478070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.131 [2024-11-17 15:07:46.478105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:01.131 [2024-11-17 15:07:46.478116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.802 ms 00:31:01.131 [2024-11-17 15:07:46.478124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.131 [2024-11-17 15:07:46.488260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.131 [2024-11-17 15:07:46.488302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:01.131 [2024-11-17 15:07:46.488313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.007 ms 00:31:01.131 [2024-11-17 15:07:46.488321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.131 [2024-11-17 15:07:46.488346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.131 [2024-11-17 15:07:46.488354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:01.131 [2024-11-17 15:07:46.488363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:01.131 [2024-11-17 15:07:46.488370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.131 [2024-11-17 15:07:46.488413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.131 [2024-11-17 15:07:46.488422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:01.131 
[2024-11-17 15:07:46.488432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:01.131 [2024-11-17 15:07:46.488439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.131 [2024-11-17 15:07:46.488452] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:01.131 [2024-11-17 15:07:46.488462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:31:01.131 [2024-11-17 15:07:46.488472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488625] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488812] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:01.131 [2024-11-17 15:07:46.488917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.488935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.488942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.488950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.488957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.488965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.488972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.488979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.488990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.488998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 
15:07:46.489012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:31:01.132 [2024-11-17 15:07:46.489196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:01.132 [2024-11-17 15:07:46.489226] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:01.132 [2024-11-17 15:07:46.489234] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8ab1106-8587-4bca-959a-3c4b3782d3c4 00:31:01.132 [2024-11-17 15:07:46.489242] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:31:01.132 [2024-11-17 15:07:46.489249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:31:01.132 [2024-11-17 15:07:46.489256] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:31:01.132 [2024-11-17 15:07:46.489263] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:01.132 [2024-11-17 15:07:46.489270] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:01.132 [2024-11-17 15:07:46.489277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:01.132 [2024-11-17 15:07:46.489286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:01.132 [2024-11-17 15:07:46.489292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:01.132 [2024-11-17 15:07:46.489298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:01.132 [2024-11-17 15:07:46.489305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.132 [2024-11-17 15:07:46.489312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:01.132 [2024-11-17 15:07:46.489320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:31:01.132 [2024-11-17 15:07:46.489326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.132 [2024-11-17 15:07:46.501486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.132 [2024-11-17 15:07:46.501517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:01.132 [2024-11-17 15:07:46.501528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.146 ms 00:31:01.132 [2024-11-17 15:07:46.501539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.132 [2024-11-17 15:07:46.501870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.132 [2024-11-17 15:07:46.501884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:01.132 [2024-11-17 15:07:46.501893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:31:01.132 [2024-11-17 15:07:46.501900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.132 [2024-11-17 15:07:46.534025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.132 [2024-11-17 15:07:46.534057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:01.132 [2024-11-17 15:07:46.534069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.132 [2024-11-17 15:07:46.534076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.132 [2024-11-17 15:07:46.534124] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.132 [2024-11-17 15:07:46.534132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:01.132 [2024-11-17 15:07:46.534139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.132 [2024-11-17 15:07:46.534146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.132 [2024-11-17 15:07:46.534203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.132 [2024-11-17 15:07:46.534213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:01.132 [2024-11-17 15:07:46.534221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.132 [2024-11-17 15:07:46.534231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.132 [2024-11-17 15:07:46.534245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.132 [2024-11-17 15:07:46.534252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:01.132 [2024-11-17 15:07:46.534259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.132 [2024-11-17 15:07:46.534266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.132 [2024-11-17 15:07:46.610518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.132 [2024-11-17 15:07:46.610558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:01.132 [2024-11-17 15:07:46.610573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.132 [2024-11-17 15:07:46.610581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.394 [2024-11-17 15:07:46.672631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.394 [2024-11-17 15:07:46.672672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:01.394 [2024-11-17 15:07:46.672687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.394 [2024-11-17 15:07:46.672695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.394 [2024-11-17 15:07:46.672741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.395 [2024-11-17 15:07:46.672750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:01.395 [2024-11-17 15:07:46.672758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.395 [2024-11-17 15:07:46.672765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.395 [2024-11-17 15:07:46.672812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.395 [2024-11-17 15:07:46.672820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:01.395 [2024-11-17 15:07:46.672828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.395 [2024-11-17 15:07:46.672836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.395 [2024-11-17 15:07:46.672903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.395 [2024-11-17 15:07:46.672911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:01.395 [2024-11-17 15:07:46.672933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.395 [2024-11-17 15:07:46.672942] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.395 [2024-11-17 15:07:46.672969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.395 [2024-11-17 15:07:46.672978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:01.395 [2024-11-17 15:07:46.672986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.395 [2024-11-17 15:07:46.672992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.395 [2024-11-17 15:07:46.673025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.395 [2024-11-17 15:07:46.673034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:01.395 [2024-11-17 15:07:46.673041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.395 [2024-11-17 15:07:46.673048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.395 [2024-11-17 15:07:46.673088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.395 [2024-11-17 15:07:46.673102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:01.395 [2024-11-17 15:07:46.673109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.395 [2024-11-17 15:07:46.673117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.395 [2024-11-17 15:07:46.673221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 204.116 ms, result 0 00:31:02.781 00:31:02.781 00:31:02.781 15:07:48 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:02.781 [2024-11-17 15:07:48.180269] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:31:02.781 [2024-11-17 15:07:48.180387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82816 ] 00:31:03.043 [2024-11-17 15:07:48.341328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.043 [2024-11-17 15:07:48.436781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.304 [2024-11-17 15:07:48.686995] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:03.304 [2024-11-17 15:07:48.687053] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:03.304 [2024-11-17 15:07:48.840757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.304 [2024-11-17 15:07:48.840805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:03.304 [2024-11-17 15:07:48.840822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:03.305 [2024-11-17 15:07:48.840830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.840876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.305 [2024-11-17 15:07:48.840886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:03.305 [2024-11-17 15:07:48.840896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:03.305 [2024-11-17 15:07:48.840903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.840933] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:03.305 [2024-11-17 15:07:48.841650] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:03.305 [2024-11-17 15:07:48.841672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.305 [2024-11-17 15:07:48.841681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:03.305 [2024-11-17 15:07:48.841689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:31:03.305 [2024-11-17 15:07:48.841697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.841970] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:03.305 [2024-11-17 15:07:48.841997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.305 [2024-11-17 15:07:48.842006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:03.305 [2024-11-17 15:07:48.842016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:03.305 [2024-11-17 15:07:48.842024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.842059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.305 [2024-11-17 15:07:48.842068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:03.305 [2024-11-17 15:07:48.842076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:03.305 [2024-11-17 15:07:48.842083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.842330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:03.305 [2024-11-17 15:07:48.842348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:03.305 [2024-11-17 15:07:48.842357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:31:03.305 [2024-11-17 15:07:48.842364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.842425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.305 [2024-11-17 15:07:48.842439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:03.305 [2024-11-17 15:07:48.842448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:03.305 [2024-11-17 15:07:48.842455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.842475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.305 [2024-11-17 15:07:48.842484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:03.305 [2024-11-17 15:07:48.842491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:03.305 [2024-11-17 15:07:48.842500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.842517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:03.305 [2024-11-17 15:07:48.846006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.305 [2024-11-17 15:07:48.846039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:03.305 [2024-11-17 15:07:48.846049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.492 ms 00:31:03.305 [2024-11-17 15:07:48.846056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.846087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.305 [2024-11-17 15:07:48.846095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:03.305 [2024-11-17 15:07:48.846103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:03.305 [2024-11-17 15:07:48.846110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.305 [2024-11-17 15:07:48.846147] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:03.305 [2024-11-17 15:07:48.846166] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:03.567 [2024-11-17 15:07:48.846202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:03.567 [2024-11-17 15:07:48.846216] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:03.567 [2024-11-17 15:07:48.846318] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:03.567 [2024-11-17 15:07:48.846335] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:03.567 [2024-11-17 15:07:48.846345] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:03.567 [2024-11-17 15:07:48.846355] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:03.567 [2024-11-17 15:07:48.846365] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:03.567 [2024-11-17 15:07:48.846372] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:03.567 [2024-11-17 15:07:48.846382] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:03.567 [2024-11-17 15:07:48.846389] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:03.567 [2024-11-17 15:07:48.846396] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:03.567 [2024-11-17 15:07:48.846403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.567 [2024-11-17 15:07:48.846411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:03.567 [2024-11-17 15:07:48.846418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:31:03.567 [2024-11-17 15:07:48.846425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.567 [2024-11-17 15:07:48.846506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.567 [2024-11-17 15:07:48.846519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:03.567 [2024-11-17 15:07:48.846527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:03.567 [2024-11-17 15:07:48.846536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.567 [2024-11-17 15:07:48.846636] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:03.567 [2024-11-17 15:07:48.846646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:03.567 [2024-11-17 15:07:48.846654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:03.567 [2024-11-17 15:07:48.846661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.567 [2024-11-17 15:07:48.846668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:03.568 [2024-11-17 15:07:48.846675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:03.568 [2024-11-17 15:07:48.846688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:03.568 [2024-11-17 15:07:48.846695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:03.568 [2024-11-17 15:07:48.846708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:03.568 [2024-11-17 15:07:48.846715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:03.568 [2024-11-17 15:07:48.846721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:03.568 [2024-11-17 15:07:48.846727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:03.568 [2024-11-17 15:07:48.846735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:03.568 [2024-11-17 15:07:48.846741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:03.568 [2024-11-17 15:07:48.846760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:03.568 [2024-11-17 15:07:48.846766] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:03.568 [2024-11-17 15:07:48.846779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:03.568 [2024-11-17 15:07:48.846791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:03.568 [2024-11-17 15:07:48.846797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:03.568 [2024-11-17 15:07:48.846810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:03.568 [2024-11-17 15:07:48.846816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:03.568 [2024-11-17 15:07:48.846828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:03.568 [2024-11-17 15:07:48.846834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:03.568 [2024-11-17 15:07:48.846847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:03.568 [2024-11-17 15:07:48.846853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:03.568 [2024-11-17 15:07:48.846867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:03.568 [2024-11-17 15:07:48.846873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:03.568 [2024-11-17 15:07:48.846879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:03.568 [2024-11-17 15:07:48.846886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:03.568 [2024-11-17 15:07:48.846892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:03.568 [2024-11-17 15:07:48.846898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:03.568 [2024-11-17 15:07:48.846911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:03.568 [2024-11-17 15:07:48.846928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846935] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:03.568 [2024-11-17 15:07:48.846942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:03.568 [2024-11-17 15:07:48.846949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:03.568 [2024-11-17 15:07:48.846957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.568 [2024-11-17 15:07:48.846964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:03.568 [2024-11-17 15:07:48.846971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:03.568 [2024-11-17 15:07:48.846978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:03.568 
[2024-11-17 15:07:48.846985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:03.568 [2024-11-17 15:07:48.846991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:03.568 [2024-11-17 15:07:48.846998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:03.568 [2024-11-17 15:07:48.847006] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:03.568 [2024-11-17 15:07:48.847017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:03.568 [2024-11-17 15:07:48.847026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:03.568 [2024-11-17 15:07:48.847033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:03.568 [2024-11-17 15:07:48.847040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:03.568 [2024-11-17 15:07:48.847047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:03.568 [2024-11-17 15:07:48.847054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:03.568 [2024-11-17 15:07:48.847060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:03.568 [2024-11-17 15:07:48.847068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:03.568 [2024-11-17 15:07:48.847074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:03.568 [2024-11-17 15:07:48.847081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:03.568 [2024-11-17 15:07:48.847088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:03.568 [2024-11-17 15:07:48.847095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:03.568 [2024-11-17 15:07:48.847102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:03.568 [2024-11-17 15:07:48.847109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:03.568 [2024-11-17 15:07:48.847116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:03.568 [2024-11-17 15:07:48.847123] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:03.568 [2024-11-17 15:07:48.847130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:03.568 [2024-11-17 15:07:48.847139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:03.568 [2024-11-17 15:07:48.847146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:03.568 [2024-11-17 15:07:48.847153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:03.568 [2024-11-17 15:07:48.847160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:03.568 [2024-11-17 15:07:48.847167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.568 [2024-11-17 15:07:48.847175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:03.569 [2024-11-17 15:07:48.847181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:31:03.569 [2024-11-17 15:07:48.847190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.870279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.870313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:03.569 [2024-11-17 15:07:48.870323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.050 ms 00:31:03.569 [2024-11-17 15:07:48.870332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.870409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.870417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:03.569 [2024-11-17 15:07:48.870425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:03.569 [2024-11-17 15:07:48.870434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.910843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.910883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:03.569 [2024-11-17 15:07:48.910894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.362 ms 00:31:03.569 [2024-11-17 15:07:48.910902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.910953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.910963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:03.569 [2024-11-17 15:07:48.910972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:03.569 [2024-11-17 15:07:48.910979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.911066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.911077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:03.569 [2024-11-17 15:07:48.911085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:03.569 [2024-11-17 15:07:48.911093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.911202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.911212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:03.569 [2024-11-17 15:07:48.911219] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:31:03.569 [2024-11-17 15:07:48.911227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.923971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.924005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:03.569 [2024-11-17 15:07:48.924015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.726 ms 00:31:03.569 [2024-11-17 15:07:48.924022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.924123] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:03.569 [2024-11-17 15:07:48.924134] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:03.569 [2024-11-17 15:07:48.924144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.924152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:03.569 [2024-11-17 15:07:48.924162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:03.569 [2024-11-17 15:07:48.924169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.936555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.936584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:03.569 [2024-11-17 15:07:48.936593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.371 ms 00:31:03.569 [2024-11-17 15:07:48.936600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.936706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.936718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:03.569 [2024-11-17 15:07:48.936726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:31:03.569 [2024-11-17 15:07:48.936733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.936794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.936805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:03.569 [2024-11-17 15:07:48.936812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:03.569 [2024-11-17 15:07:48.936819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.937390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.937412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:03.569 [2024-11-17 15:07:48.937420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:31:03.569 [2024-11-17 15:07:48.937427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.937442] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:03.569 [2024-11-17 15:07:48.937455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.937463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:03.569 [2024-11-17 15:07:48.937470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:03.569 [2024-11-17 15:07:48.937478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.948241] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:03.569 [2024-11-17 15:07:48.948373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.948410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:03.569 [2024-11-17 15:07:48.948423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.879 ms 00:31:03.569 [2024-11-17 15:07:48.948431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.950602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.950628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:03.569 [2024-11-17 15:07:48.950636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.154 ms 00:31:03.569 [2024-11-17 15:07:48.950643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.950700] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:03.569 [2024-11-17 15:07:48.951163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.951182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:03.569 [2024-11-17 15:07:48.951191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:31:03.569 [2024-11-17 15:07:48.951198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.951219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.951231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:03.569 [2024-11-17 15:07:48.951239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:03.569 [2024-11-17 15:07:48.951246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.951273] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:03.569 [2024-11-17 15:07:48.951282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.951289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:03.569 [2024-11-17 15:07:48.951297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:03.569 [2024-11-17 15:07:48.951304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.974616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.974653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:03.569 [2024-11-17 15:07:48.974664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.296 ms 00:31:03.569 [2024-11-17 15:07:48.974672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.974747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.569 [2024-11-17 15:07:48.974757] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:03.569 [2024-11-17 15:07:48.974765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:03.569 [2024-11-17 15:07:48.974772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.569 [2024-11-17 15:07:48.975879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.730 ms, result 0 00:31:04.958  [2024-11-17T15:07:51.446Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-17T15:07:52.392Z] Copying: 28/1024 [MB] (12 MBps) [2024-11-17T15:07:53.338Z] Copying: 43/1024 [MB] (14 MBps) [2024-11-17T15:07:54.282Z] Copying: 60/1024 [MB] (17 MBps) [2024-11-17T15:07:55.228Z] Copying: 78/1024 [MB] (18 MBps) [2024-11-17T15:07:56.171Z] Copying: 93/1024 [MB] (14 MBps) [2024-11-17T15:07:57.559Z] Copying: 112/1024 [MB] (18 MBps) [2024-11-17T15:07:58.502Z] Copying: 134/1024 [MB] (22 MBps) [2024-11-17T15:07:59.446Z] Copying: 151/1024 [MB] (16 MBps) [2024-11-17T15:08:00.389Z] Copying: 162/1024 [MB] (10 MBps) [2024-11-17T15:08:01.402Z] Copying: 174/1024 [MB] (11 MBps) [2024-11-17T15:08:02.351Z] Copying: 199/1024 [MB] (25 MBps) [2024-11-17T15:08:03.296Z] Copying: 219/1024 [MB] (19 MBps) [2024-11-17T15:08:04.287Z] Copying: 237/1024 [MB] (18 MBps) [2024-11-17T15:08:05.231Z] Copying: 257/1024 [MB] (19 MBps) [2024-11-17T15:08:06.292Z] Copying: 277/1024 [MB] (19 MBps) [2024-11-17T15:08:07.237Z] Copying: 295/1024 [MB] (18 MBps) [2024-11-17T15:08:08.181Z] Copying: 310/1024 [MB] (15 MBps) [2024-11-17T15:08:09.569Z] Copying: 327/1024 [MB] (17 MBps) [2024-11-17T15:08:10.514Z] Copying: 343/1024 [MB] (15 MBps) [2024-11-17T15:08:11.458Z] Copying: 359/1024 [MB] (16 MBps) [2024-11-17T15:08:12.401Z] Copying: 372/1024 [MB] (12 MBps) [2024-11-17T15:08:13.343Z] Copying: 382/1024 [MB] (10 MBps) [2024-11-17T15:08:14.286Z] Copying: 393/1024 [MB] (10 MBps) [2024-11-17T15:08:15.229Z] Copying: 414/1024 [MB] (20 MBps) [2024-11-17T15:08:16.174Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-17T15:08:17.561Z] Copying: 440/1024 [MB] (14 MBps) [2024-11-17T15:08:18.504Z] Copying: 460/1024 [MB] (19 MBps) [2024-11-17T15:08:19.458Z] Copying: 480/1024 [MB] (20 MBps) [2024-11-17T15:08:20.404Z] Copying: 493/1024 [MB] (12 MBps) [2024-11-17T15:08:21.349Z] Copying: 508/1024 [MB] (15 MBps) [2024-11-17T15:08:22.293Z] Copying: 533/1024 [MB] (24 MBps) [2024-11-17T15:08:23.237Z] Copying: 551/1024 [MB] (18 MBps) [2024-11-17T15:08:24.182Z] Copying: 567/1024 [MB] (16 MBps) [2024-11-17T15:08:25.570Z] Copying: 579/1024 [MB] (12 MBps) [2024-11-17T15:08:26.516Z] Copying: 604/1024 [MB] (24 MBps) [2024-11-17T15:08:27.458Z] Copying: 623/1024 [MB] (19 MBps) [2024-11-17T15:08:28.403Z] Copying: 637/1024 [MB] (13 MBps) [2024-11-17T15:08:29.349Z] Copying: 653/1024 [MB] (16 MBps) [2024-11-17T15:08:30.292Z] Copying: 672/1024 [MB] (19 MBps) [2024-11-17T15:08:31.235Z] Copying: 691/1024 [MB] (19 MBps) [2024-11-17T15:08:32.183Z] Copying: 711/1024 [MB] (19 MBps) [2024-11-17T15:08:33.658Z] Copying: 724/1024 [MB] (13 MBps) [2024-11-17T15:08:34.232Z] Copying: 737/1024 [MB] (12 MBps) [2024-11-17T15:08:35.177Z] Copying: 756/1024 [MB] (19 MBps) [2024-11-17T15:08:36.566Z] Copying: 780/1024 [MB] (23 MBps) [2024-11-17T15:08:37.510Z] Copying: 793/1024 [MB] (13 MBps) [2024-11-17T15:08:38.455Z] Copying: 804/1024 [MB] (10 MBps) [2024-11-17T15:08:39.400Z] Copying: 821/1024 [MB] (17 MBps) [2024-11-17T15:08:40.345Z] Copying: 838/1024 [MB] (16 MBps) [2024-11-17T15:08:41.290Z] Copying: 857/1024 [MB] (18 MBps) 
[2024-11-17T15:08:42.233Z] Copying: 878/1024 [MB] (21 MBps) [2024-11-17T15:08:43.179Z] Copying: 899/1024 [MB] (20 MBps) [2024-11-17T15:08:44.568Z] Copying: 921/1024 [MB] (22 MBps) [2024-11-17T15:08:45.511Z] Copying: 937/1024 [MB] (15 MBps) [2024-11-17T15:08:46.455Z] Copying: 959/1024 [MB] (22 MBps) [2024-11-17T15:08:47.401Z] Copying: 985/1024 [MB] (26 MBps) [2024-11-17T15:08:47.664Z] Copying: 1015/1024 [MB] (29 MBps) [2024-11-17T15:08:47.664Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-17 15:08:47.547571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.121 [2024-11-17 15:08:47.547644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:02.121 [2024-11-17 15:08:47.547660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:02.121 [2024-11-17 15:08:47.547670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.121 [2024-11-17 15:08:47.547693] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:02.121 [2024-11-17 15:08:47.550702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.121 [2024-11-17 15:08:47.550739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:02.121 [2024-11-17 15:08:47.550751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:32:02.121 [2024-11-17 15:08:47.550760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.121 [2024-11-17 15:08:47.551014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.121 [2024-11-17 15:08:47.551033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:02.121 [2024-11-17 15:08:47.551043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:32:02.121 [2024-11-17 15:08:47.551052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.121 [2024-11-17 15:08:47.551081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.121 [2024-11-17 15:08:47.551090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:02.121 [2024-11-17 15:08:47.551099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:02.121 [2024-11-17 15:08:47.551108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.121 [2024-11-17 15:08:47.551158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.121 [2024-11-17 15:08:47.551167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:02.121 [2024-11-17 15:08:47.551178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:02.121 [2024-11-17 15:08:47.551186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.121 [2024-11-17 15:08:47.551200] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:02.121 [2024-11-17 15:08:47.551213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:02.121 [2024-11-17 15:08:47.551223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 
261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551960] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.551969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.552083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.552092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.552101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:02.121 [2024-11-17 15:08:47.552110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 
15:08:47.552275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:02.122 [2024-11-17 15:08:47.552466] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:02.122 [2024-11-17 15:08:47.552475] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8ab1106-8587-4bca-959a-3c4b3782d3c4 00:32:02.122 [2024-11-17 15:08:47.552483] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:02.122 [2024-11-17 15:08:47.552491] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 00:32:02.122 [2024-11-17 15:08:47.553053] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:32:02.122 [2024-11-17 15:08:47.553065] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:32:02.122 [2024-11-17 15:08:47.553073] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:02.122 [2024-11-17 15:08:47.553086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:02.122 [2024-11-17 15:08:47.553094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:02.122 [2024-11-17 15:08:47.553102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:02.122 [2024-11-17 15:08:47.553109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:02.122 [2024-11-17 15:08:47.553118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.122 [2024-11-17 15:08:47.553127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:02.122 [2024-11-17 15:08:47.553136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.918 ms 00:32:02.122 [2024-11-17 15:08:47.553144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.122 [2024-11-17 15:08:47.566410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.122 [2024-11-17 15:08:47.566445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:02.122 [2024-11-17 15:08:47.566456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.243 ms 00:32:02.122 [2024-11-17 15:08:47.566469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.122 [2024-11-17 15:08:47.566819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.122 [2024-11-17 15:08:47.566832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:02.122 [2024-11-17 15:08:47.566841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:32:02.122 [2024-11-17 15:08:47.566848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.122 [2024-11-17 15:08:47.601733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.122 [2024-11-17 15:08:47.601782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:02.122 [2024-11-17 15:08:47.601793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.122 [2024-11-17 15:08:47.601801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.122 [2024-11-17 15:08:47.601857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.122 [2024-11-17 15:08:47.601866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:02.122 [2024-11-17 15:08:47.601873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.122 [2024-11-17 15:08:47.601881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.122 [2024-11-17 15:08:47.601951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.122 [2024-11-17 15:08:47.601962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:02.122 [2024-11-17 15:08:47.601973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.122 [2024-11-17 15:08:47.601981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:32:02.122 [2024-11-17 15:08:47.601997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.122 [2024-11-17 15:08:47.602005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:02.122 [2024-11-17 15:08:47.602012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.122 [2024-11-17 15:08:47.602019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.383 [2024-11-17 15:08:47.686192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.383 [2024-11-17 15:08:47.686259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:02.383 [2024-11-17 15:08:47.686272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.383 [2024-11-17 15:08:47.686281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.383 [2024-11-17 15:08:47.756241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.383 [2024-11-17 15:08:47.756303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:02.383 [2024-11-17 15:08:47.756315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.383 [2024-11-17 15:08:47.756324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.383 [2024-11-17 15:08:47.756402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.383 [2024-11-17 15:08:47.756411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:02.383 [2024-11-17 15:08:47.756421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.383 [2024-11-17 15:08:47.756433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.383 [2024-11-17 15:08:47.756470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.383 [2024-11-17 15:08:47.756479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:02.383 [2024-11-17 15:08:47.756487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.383 [2024-11-17 15:08:47.756495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.383 [2024-11-17 15:08:47.756570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.383 [2024-11-17 15:08:47.756581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:02.383 [2024-11-17 15:08:47.756589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.383 [2024-11-17 15:08:47.756597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.383 [2024-11-17 15:08:47.756629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.383 [2024-11-17 15:08:47.756638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:02.383 [2024-11-17 15:08:47.756646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.383 [2024-11-17 15:08:47.756653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.383 [2024-11-17 15:08:47.756690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.383 [2024-11-17 15:08:47.756700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:02.383 [2024-11-17 15:08:47.756708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.383 [2024-11-17 
15:08:47.756716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.383 [2024-11-17 15:08:47.756759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.383 [2024-11-17 15:08:47.756769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:02.383 [2024-11-17 15:08:47.756777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.383 [2024-11-17 15:08:47.756784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.383 [2024-11-17 15:08:47.756907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 209.303 ms, result 0 00:32:02.955 00:32:02.955 00:32:02.955 15:08:48 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:05.504 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 80848 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 80848 ']' 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 80848 00:32:05.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80848) - No such process 00:32:05.504 Process with pid 80848 is not found 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 80848 is not found' 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:05.504 Remove shared memory files 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:05.504 15:08:50 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:05.505 15:08:50 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_band_md /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_l2p_l1 /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_l2p_l2 /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_l2p_l2_ctx /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_nvc_md /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_p2l_pool /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_sb /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_sb_shm /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_trim_bitmap /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_trim_log /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_trim_md /dev/hugepages/ftl_f8ab1106-8587-4bca-959a-3c4b3782d3c4_vmap 00:32:05.505 15:08:50 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:05.505 15:08:50 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:05.505 15:08:50 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:05.505 ************************************ 00:32:05.505 END TEST ftl_restore_fast 
00:32:05.505 ************************************ 00:32:05.505 00:32:05.505 real 4m14.213s 00:32:05.505 user 4m1.924s 00:32:05.505 sys 0m12.199s 00:32:05.505 15:08:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.505 15:08:50 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:05.505 15:08:50 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:05.505 15:08:50 ftl -- ftl/ftl.sh@14 -- # killprocess 72121 00:32:05.505 15:08:50 ftl -- common/autotest_common.sh@954 -- # '[' -z 72121 ']' 00:32:05.505 15:08:50 ftl -- common/autotest_common.sh@958 -- # kill -0 72121 00:32:05.505 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72121) - No such process 00:32:05.505 Process with pid 72121 is not found 00:32:05.505 15:08:50 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 72121 is not found' 00:32:05.505 15:08:50 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:05.505 15:08:50 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83500 00:32:05.505 15:08:50 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83500 00:32:05.505 15:08:50 ftl -- common/autotest_common.sh@835 -- # '[' -z 83500 ']' 00:32:05.505 15:08:50 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.505 15:08:50 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:05.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.505 15:08:50 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.505 15:08:50 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:05.505 15:08:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:05.505 15:08:50 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:05.505 [2024-11-17 15:08:50.699615] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:32:05.505 [2024-11-17 15:08:50.699736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83500 ] 00:32:05.505 [2024-11-17 15:08:50.860759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.505 [2024-11-17 15:08:50.961933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.077 15:08:51 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.077 15:08:51 ftl -- common/autotest_common.sh@868 -- # return 0 00:32:06.077 15:08:51 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:06.337 nvme0n1 00:32:06.598 15:08:51 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:06.598 15:08:51 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:06.598 15:08:51 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:06.598 15:08:52 ftl -- ftl/common.sh@28 -- # stores=12aa37be-7ca4-473a-87df-6586a041f523 00:32:06.598 15:08:52 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:06.598 15:08:52 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12aa37be-7ca4-473a-87df-6586a041f523 00:32:06.859 15:08:52 ftl -- ftl/ftl.sh@23 -- # killprocess 83500 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@954 -- # '[' -z 83500 ']' 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@958 -- # kill -0 83500 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@959 -- # uname 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83500 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:06.859 killing process with pid 83500 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83500' 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@973 -- # kill 83500 00:32:06.859 15:08:52 ftl -- common/autotest_common.sh@978 -- # wait 83500 00:32:08.243 15:08:53 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:08.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:08.503 Waiting for block devices as requested 00:32:08.503 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:08.503 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:08.503 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:08.764 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:14.050 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:14.050 Remove shared memory files 00:32:14.050 15:08:59 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:14.050 15:08:59 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:14.050 15:08:59 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:14.050 15:08:59 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:14.050 15:08:59 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:14.050 15:08:59 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:14.050 15:08:59 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:14.050 
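The trace above records ftl.sh's exit path: a fresh spdk_tgt is started, the NVMe controller at 0000:00:11.0 is attached as nvme0, any lvol store left behind by the earlier FTL runs is enumerated and deleted over RPC, the target is killed, setup.sh rebinds the devices, and remove_shm clears stale FTL hugepage files and the iscsi shm file. A minimal sketch of that cleanup pattern, using only commands that appear in the log (the loop structure and the ftl_* glob are assumptions inferred from the clear_lvols and remove_shm traces, not verbatim script contents):

  # enumerate and delete leftover lvol stores (assumed loop, per the clear_lvols trace)
  for lvs in $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  done
  # drop FTL metadata kept in hugepages plus the iscsi shm file (per the remove_shm trace)
  rm -f /dev/hugepages/ftl_*
  rm -f /dev/shm/iscsi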
************************************ 00:32:14.050 END TEST ftl 00:32:14.050 ************************************ 00:32:14.050 00:32:14.050 real 17m3.194s 00:32:14.050 user 18m49.468s 00:32:14.050 sys 1m43.370s 00:32:14.050 15:08:59 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.050 15:08:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:14.050 15:08:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:14.050 15:08:59 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:14.050 15:08:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:14.050 15:08:59 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:14.050 15:08:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:14.050 15:08:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:14.050 15:08:59 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:14.050 15:08:59 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:14.050 15:08:59 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:14.050 15:08:59 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:14.050 15:08:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.050 15:08:59 -- common/autotest_common.sh@10 -- # set +x 00:32:14.050 15:08:59 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:14.050 15:08:59 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:14.050 15:08:59 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:14.050 15:08:59 -- common/autotest_common.sh@10 -- # set +x 00:32:15.438 INFO: APP EXITING 00:32:15.438 INFO: killing all VMs 00:32:15.438 INFO: killing vhost app 00:32:15.438 INFO: EXIT DONE 00:32:15.438 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:16.012 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:16.012 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:16.012 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:16.012 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:16.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:16.845 Cleaning 00:32:16.845 Removing: /var/run/dpdk/spdk0/config 00:32:16.845 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:16.845 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:16.845 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:16.845 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:16.845 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:16.845 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:16.845 Removing: /var/run/dpdk/spdk0 00:32:16.845 Removing: /var/run/dpdk/spdk_pid56869 00:32:16.845 Removing: /var/run/dpdk/spdk_pid57060 00:32:16.845 Removing: /var/run/dpdk/spdk_pid57273 00:32:16.845 Removing: /var/run/dpdk/spdk_pid57371 00:32:16.845 Removing: /var/run/dpdk/spdk_pid57405 00:32:16.845 Removing: /var/run/dpdk/spdk_pid57522 00:32:16.845 Removing: /var/run/dpdk/spdk_pid57540 00:32:16.845 Removing: /var/run/dpdk/spdk_pid57734 00:32:16.845 Removing: /var/run/dpdk/spdk_pid57827 00:32:16.845 Removing: /var/run/dpdk/spdk_pid57917 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58027 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58119 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58154 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58190 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58261 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58345 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58770 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58823 
00:32:16.845 Removing: /var/run/dpdk/spdk_pid58875 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58891 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58982 00:32:16.845 Removing: /var/run/dpdk/spdk_pid58998 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59089 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59105 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59158 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59176 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59229 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59247 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59396 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59438 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59516 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59694 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59778 00:32:16.845 Removing: /var/run/dpdk/spdk_pid59814 00:32:16.845 Removing: /var/run/dpdk/spdk_pid60241 00:32:16.845 Removing: /var/run/dpdk/spdk_pid60339 00:32:16.845 Removing: /var/run/dpdk/spdk_pid60448 00:32:16.845 Removing: /var/run/dpdk/spdk_pid60502 00:32:16.845 Removing: /var/run/dpdk/spdk_pid60522 00:32:16.845 Removing: /var/run/dpdk/spdk_pid60606 00:32:16.845 Removing: /var/run/dpdk/spdk_pid61226 00:32:16.845 Removing: /var/run/dpdk/spdk_pid61263 00:32:16.845 Removing: /var/run/dpdk/spdk_pid61736 00:32:16.845 Removing: /var/run/dpdk/spdk_pid61834 00:32:16.845 Removing: /var/run/dpdk/spdk_pid61944 00:32:16.845 Removing: /var/run/dpdk/spdk_pid61998 00:32:16.845 Removing: /var/run/dpdk/spdk_pid62018 00:32:16.845 Removing: /var/run/dpdk/spdk_pid62049 00:32:16.845 Removing: /var/run/dpdk/spdk_pid63888 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64014 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64025 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64042 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64084 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64088 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64100 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64145 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64149 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64161 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64206 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64210 00:32:16.845 Removing: /var/run/dpdk/spdk_pid64222 00:32:16.845 Removing: /var/run/dpdk/spdk_pid65591 00:32:16.845 Removing: /var/run/dpdk/spdk_pid65694 00:32:16.845 Removing: /var/run/dpdk/spdk_pid67104 00:32:16.845 Removing: /var/run/dpdk/spdk_pid68503 00:32:16.845 Removing: /var/run/dpdk/spdk_pid68585 00:32:16.845 Removing: /var/run/dpdk/spdk_pid68661 00:32:16.845 Removing: /var/run/dpdk/spdk_pid68737 00:32:16.845 Removing: /var/run/dpdk/spdk_pid68836 00:32:16.845 Removing: /var/run/dpdk/spdk_pid68915 00:32:16.845 Removing: /var/run/dpdk/spdk_pid69058 00:32:16.845 Removing: /var/run/dpdk/spdk_pid69412 00:32:16.845 Removing: /var/run/dpdk/spdk_pid69443 00:32:16.845 Removing: /var/run/dpdk/spdk_pid69890 00:32:16.845 Removing: /var/run/dpdk/spdk_pid70074 00:32:16.845 Removing: /var/run/dpdk/spdk_pid70173 00:32:16.845 Removing: /var/run/dpdk/spdk_pid70284 00:32:16.845 Removing: /var/run/dpdk/spdk_pid70333 00:32:16.845 Removing: /var/run/dpdk/spdk_pid70359 00:32:16.845 Removing: /var/run/dpdk/spdk_pid70644 00:32:16.845 Removing: /var/run/dpdk/spdk_pid70707 00:32:16.845 Removing: /var/run/dpdk/spdk_pid70780 00:32:16.845 Removing: /var/run/dpdk/spdk_pid71164 00:32:16.845 Removing: /var/run/dpdk/spdk_pid71310 00:32:16.845 Removing: /var/run/dpdk/spdk_pid72121 00:32:16.845 Removing: /var/run/dpdk/spdk_pid72256 00:32:16.845 Removing: /var/run/dpdk/spdk_pid72420 00:32:16.845 Removing: 
/var/run/dpdk/spdk_pid72523 00:32:16.845 Removing: /var/run/dpdk/spdk_pid72830 00:32:16.845 Removing: /var/run/dpdk/spdk_pid73128 00:32:16.845 Removing: /var/run/dpdk/spdk_pid73485 00:32:16.845 Removing: /var/run/dpdk/spdk_pid73665 00:32:16.845 Removing: /var/run/dpdk/spdk_pid73835 00:32:16.845 Removing: /var/run/dpdk/spdk_pid73894 00:32:16.845 Removing: /var/run/dpdk/spdk_pid74065 00:32:16.845 Removing: /var/run/dpdk/spdk_pid74095 00:32:17.106 Removing: /var/run/dpdk/spdk_pid74150 00:32:17.106 Removing: /var/run/dpdk/spdk_pid74365 00:32:17.106 Removing: /var/run/dpdk/spdk_pid74601 00:32:17.106 Removing: /var/run/dpdk/spdk_pid75195 00:32:17.106 Removing: /var/run/dpdk/spdk_pid75948 00:32:17.106 Removing: /var/run/dpdk/spdk_pid76415 00:32:17.106 Removing: /var/run/dpdk/spdk_pid77283 00:32:17.106 Removing: /var/run/dpdk/spdk_pid77430 00:32:17.106 Removing: /var/run/dpdk/spdk_pid77520 00:32:17.106 Removing: /var/run/dpdk/spdk_pid77971 00:32:17.106 Removing: /var/run/dpdk/spdk_pid78025 00:32:17.106 Removing: /var/run/dpdk/spdk_pid78543 00:32:17.106 Removing: /var/run/dpdk/spdk_pid79042 00:32:17.106 Removing: /var/run/dpdk/spdk_pid79835 00:32:17.106 Removing: /var/run/dpdk/spdk_pid79957 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80004 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80063 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80124 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80191 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80379 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80448 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80521 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80601 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80636 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80696 00:32:17.106 Removing: /var/run/dpdk/spdk_pid80848 00:32:17.106 Removing: /var/run/dpdk/spdk_pid81081 00:32:17.106 Removing: /var/run/dpdk/spdk_pid81577 00:32:17.106 Removing: /var/run/dpdk/spdk_pid82246 00:32:17.106 Removing: /var/run/dpdk/spdk_pid82816 00:32:17.106 Removing: /var/run/dpdk/spdk_pid83500 00:32:17.106 Clean 00:32:17.106 15:09:02 -- common/autotest_common.sh@1453 -- # return 0 00:32:17.106 15:09:02 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:32:17.106 15:09:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.106 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:32:17.106 15:09:02 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:32:17.106 15:09:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.106 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:32:17.106 15:09:02 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:17.106 15:09:02 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:17.106 15:09:02 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:17.106 15:09:02 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:32:17.106 15:09:02 -- spdk/autotest.sh@398 -- # hostname 00:32:17.106 15:09:02 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:17.367 geninfo: WARNING: invalid characters removed from testname! 
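The capture step above runs lcov against the spdk tree to collect the gcov counters produced during the tests into cov_test.info, tagging the data with the runner's hostname (the fedora39-cloud image name); the entries that follow merge it with the pre-test baseline and strip DPDK, system, and example sources before timing_finish renders the flame graph. A condensed sketch of that chain, with flags taken from the logged commands and the repeated --rc switches factored into a variable (the variable is an editorial shorthand, not part of autotest.sh):

  RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1"
  OUT=/home/vagrant/spdk_repo/spdk/../output
  # capture per-test coverage from the spdk tree, tagged with the host name (autotest.sh@398)
  lcov $RC -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$OUT/cov_test.info"
  # merge with the pre-test baseline and filter out DPDK, system headers, and example apps (autotest.sh@399-407)
  lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  lcov $RC -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  lcov $RC -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
  lcov $RC -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
  lcov $RC -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
  lcov $RC -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"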
00:32:44.003 15:09:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:46.551 15:09:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:48.466 15:09:33 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:50.379 15:09:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:52.921 15:09:38 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:55.480 15:09:40 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:57.397 15:09:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:57.397 15:09:42 -- spdk/autorun.sh@1 -- $ timing_finish 00:32:57.397 15:09:42 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:32:57.397 15:09:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:57.397 15:09:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:32:57.397 15:09:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:57.397 + [[ -n 5030 ]] 00:32:57.397 + sudo kill 5030 00:32:57.408 [Pipeline] } 00:32:57.424 [Pipeline] // timeout 00:32:57.430 [Pipeline] } 00:32:57.446 [Pipeline] // stage 00:32:57.451 [Pipeline] } 00:32:57.466 [Pipeline] // catchError 00:32:57.475 [Pipeline] stage 00:32:57.478 [Pipeline] { (Stop VM) 00:32:57.491 [Pipeline] sh 00:32:57.775 + vagrant halt 00:33:00.319 ==> default: Halting domain... 
00:33:06.918 [Pipeline] sh 00:33:07.201 + vagrant destroy -f 00:33:09.744 ==> default: Removing domain... 00:33:10.330 [Pipeline] sh 00:33:10.614 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:10.626 [Pipeline] } 00:33:10.640 [Pipeline] // stage 00:33:10.645 [Pipeline] } 00:33:10.658 [Pipeline] // dir 00:33:10.663 [Pipeline] } 00:33:10.677 [Pipeline] // wrap 00:33:10.683 [Pipeline] } 00:33:10.694 [Pipeline] // catchError 00:33:10.703 [Pipeline] stage 00:33:10.705 [Pipeline] { (Epilogue) 00:33:10.717 [Pipeline] sh 00:33:11.006 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:16.292 [Pipeline] catchError 00:33:16.295 [Pipeline] { 00:33:16.308 [Pipeline] sh 00:33:16.593 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:16.593 Artifacts sizes are good 00:33:16.603 [Pipeline] } 00:33:16.622 [Pipeline] // catchError 00:33:16.638 [Pipeline] archiveArtifacts 00:33:16.652 Archiving artifacts 00:33:16.764 [Pipeline] cleanWs 00:33:16.777 [WS-CLEANUP] Deleting project workspace... 00:33:16.777 [WS-CLEANUP] Deferred wipeout is used... 00:33:16.784 [WS-CLEANUP] done 00:33:16.787 [Pipeline] } 00:33:16.802 [Pipeline] // stage 00:33:16.807 [Pipeline] } 00:33:16.820 [Pipeline] // node 00:33:16.826 [Pipeline] End of Pipeline 00:33:16.861 Finished: SUCCESS